How to Make an Interactive Machine Learning Model for FREE with Hugging Face
Have you ever wanted to make your machine learning model interactive? Have you ever wanted to build your AI portfolio, but aren't interested in keeping it stuck inside a Jupyter notebook and a GitHub repository?
Well, worry no more! Deploying your interactive machine learning model with Hugging Face offers a straightforward and effective way to make your models accessible to a wider audience, enabling users to interact with your AI creations in real time. Hugging Face, a platform renowned for its comprehensive repository of pre-trained models and tools in natural language processing (NLP) and beyond, also provides the infrastructure to deploy machine learning models easily.
Here’s an introduction to deploying your interactive machine learning model with Hugging Face:
1. Preparing Your Model
Before deployment, ensure your machine learning model is well-trained, tested, and saved in a format compatible with Hugging Face. For most models, this means converting them into a format like PyTorch, TensorFlow, or ONNX.
In this example, we will be using a simple image classification model that tells cats from dogs, built in Jeremy Howard's fast.ai course. You can run the entire notebook and export a model.pkl file containing the trained model you will deploy.
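If you skipped the course notebook or want a refresher, the training-and-export step looks roughly like the sketch below. This is a minimal sketch based on the fast.ai "Is it a cat?" example; the dataset, architecture (resnet18), and training settings are assumptions you can swap for your own setup.
from fastai.vision.all import *

# Oxford-IIIT Pets: filenames starting with an uppercase letter are cats,
# so is_cat doubles as the labeling function.
path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(192))

learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)

# Serialize the trained Learner (weights + preprocessing) to model.pkl for deployment
learn.export('model.pkl')
The only line that really matters for deployment is learn.export('model.pkl'), which saves both the weights and the preprocessing pipeline into a single file.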
2. Creating a Hugging Face Account and Space Repository
Sign up for a Hugging Face account if you haven’t already. It is free and very straightforward.
In addition, please enable Git over SSH by following the provided steps. We need Git in order to upload our model and content to Hugging Face for deployment.
If you don’t know what Git is or where to get started, this is the perfect time to learn! Check out my other free article, which goes step by step through setting up Git and GitHub for personal use.
Once registered, create a new space for your model. This space repository will host your model files and any associated metadata, making it accessible for deployment.
In the space, select the Gradio Space SDK. Gradio is a Python library that allows you to quickly create customizable web apps for your machine learning models and data processing pipelines. Optionally, you can select the “Apache 2.0” license, which allows others to use, modify, and distribute the licensed software, including creating derivative works, without requiring those derivative works to be licensed under the same terms. Since you will probably not be making money off this space, I suggest this license so others can use it for free :)
Assuming you have set up Git correctly on your computer, open your new Space's page, copy the SSH clone command, and paste it into a terminal on your computer. Do NOT copy the HTTPS link; if you do, you will not be able to push to Hugging Face. (I’m not sure why that link is still offered.)
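For reference, the clone command should look something like the sketch below. The username and space name here are placeholders; copy the exact SSH command shown on your own Space page.
# Clone your Space over SSH (replace your-username/your-space-name with your own)
git clone git@hf.co:spaces/your-username/your-space-name
cd your-space-name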
Congratulations, you’re ready to start coding and uploading your model to Hugging Face through Gradio!
3. Preparing to Deploy your Model to Gradio
Navigate to your repository through your computer’s terminal. You will see a README and a .gitattributes file in the repository. Copy your model.pkl file into this folder.
Next, in the terminal, write out the following commands to create a virtual environment to keep your libraries and dependencies in.
# Create a Virtual Environment called "venv"
python -m venv venv
# Activate the Virtual Environment "venv"
source venv/bin/activate
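If you happen to be on Windows rather than macOS/Linux, the activation step differs slightly:
# On Windows, activate the virtual environment with:
venv\Scripts\activate
# (in PowerShell: venv\Scripts\Activate.ps1)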
Then, in the same directory, create a requirements.txt file to hold your dependencies. Copy and paste the following contents into it:
fastai
graphviz
ipywidgets
matplotlib
nbdev>=0.2.12
pandas
scikit_learn
azure-cognitiveservices-search-imagesearch
sentencepiece
torch
gradio
numpy
jupyter
Back in the terminal, run the following command to install all the dependencies needed to deploy your project:
# Installs every dependency listed in requirements.txt
pip install -r requirements.txt
After installing all your dependencies, run the following command to open Jupyter Notebook, then create a new notebook and call it app.ipynb.
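# Launch Jupyter Notebook (installed via requirements.txt) from the project folder
jupyter notebook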
4. Building Your Interactive Web App
Now you’re ready to create and upload your machine learning model app! If you are unfamiliar with how Jupyter Notebook works, I suggest going through this notebook on Kaggle (again from fast.ai).
While we did create a notebook for this project, Hugging Face only accepts .py files. We could work directly in a .py file, but a notebook is a better place to develop the model and code: we can improve things iteratively rather than blindly writing into a .py file that we cannot execute until the very end.
To make this process very simple, we will use nbdev, a library that makes exporting code from a notebook into a Python file very easy.
In the first cell, copy and paste the following directive, which tells nbdev that this notebook has Python code we want to export (and that the exported module should be called app).
#|default_exp app
Next, we will import all the libraries necessary to create our interactive web app.
#|export
from fastai.vision.all import *
import gradio as gr
print(gr.__version__)
Note: you MUST include the #|export line, as this tells nbdev that we want to export the contents of that cell.
Then, we can load our exported model with the following command.
#|export
learn = load_learner('model.pkl')
The 'model.pkl' argument to load_learner may be different depending on where your model file is located relative to the notebook. Also note that the pickled learner references the is_cat labeling function it was trained with (introduced in the next step), so that function must already be defined in the notebook before this cell runs; you can see in the exported app.py later on that it ends up right next to the imports.
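As a quick, optional sanity check on the loaded learner, you can print the label vocabulary it was trained with. For the is_cat labeling this should come out as [False, True], which is why the display categories defined below are ordered ('Dog', 'Cat'). This sketch assumes the learn object from the previous cell.
# Optional sanity check: the label vocabulary the learner was trained with.
# Expected output for this model: [False, True]  (not-a-cat, cat)
print(learn.dls.vocab)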
Optionally, you can run the following commands to ensure your model is working as intended. The 'dog.jpg' file is any picture of a dog you download off the internet; the cell should open and display the same picture inside the notebook.
im = PILImage.create('dog.jpg')
im.thumbnail((192,192))
im

# Returns a tuple: (predicted label, label index, probabilities per class)
learn.predict(im)
Next, we must create the functions that tell our model and display what the two choices are when identifying the animal in a picture.
#|export
# Filenames starting with an uppercase letter are cats, so 'cat' maps to True and 'dog' to False.
def is_cat(x): return x[0].isupper()

categories = ('Dog', 'Cat')

def classify_image(img):
    pred,idx,probs = learn.predict(img)
    return dict(zip(categories, map(float,probs)))
You can test to make sure your dog (or cat) jpg is identified correctly.
classify_image(im)
Now, finally, you can create the interface for your machine learning model! The items in the examples list are images you downloaded off the internet to use as clickable examples.
#|export
examples = ['dog.jpg','cat.jpg','cat-dog.jpg']
inft = gr.Interface(fn=classify_image,inputs="image",outputs="label",examples=examples)
inft.launch(inline=False,share=True)
And lastly, we need to export all the marked code to app.py, which Hugging Face will read!
import nbdev
nbdev.export.nb_export('app.ipynb', '.')
After running the entire notebook, the exported app.py should look something like this:
# AUTOGENERATED! DO NOT EDIT! File to edit: app.ipynb.
# %% auto 0
__all__ = ['learn', 'categories', 'examples', 'inft', 'is_cat', 'classify_image']
# %% app.ipynb 2
from fastai.vision.all import *
import gradio as gr
print(gr.__version__)
def is_cat(x): return x[0].isupper()
# %% app.ipynb 4
learn = load_learner('model.pkl')
# %% app.ipynb 6
categories = ('Dog', 'Cat')
def classify_image(img):
    pred,idx,probs = learn.predict(img)
    return dict(zip(categories, map(float,probs)))
# %% app.ipynb 9
examples = ['dog.jpg','cat.jpg','cat-dog.jpg']
inft = gr.Interface(fn=classify_image,inputs="image",outputs="label",examples=examples)
inft.launch(inline=False)
5. Upload Your Interactive Web App to Hugging Face
Now we can upload our code to Hugging Face. If you’re familiar with the process of adding and committing files, this should be relatively straightforward.
But not so fast! Before you go ahead and upload your project, you need to enable Git LFS for it, since model.pkl will be bigger than the maximum file size of a normal commit (10 MB).
First off, download and install the Git LFS command-line extension. Once downloaded and installed, set up Git LFS for your user account by running:
git lfs install
In each Git repository where you want to use Git LFS, add the file types you’d like Git LFS to manage. I also suggest adding model.pkl directly, since sometimes Git doesn’t pick up the *.pkl pattern. You can do so with the following commands:
git lfs track "*.pkl"
git lfs track "model.pkl"
Afterwards, your .gitattributes file should look like this:
*.pkl filter=lfs diff=lfs merge=lfs -text
model.pkl filter=lfs diff=lfs merge=lfs -text
Now you can follow the normal series of commands to upload code to a repository.
# Just an abbreviated step to add all. I personally like to add files one at a time
git add -A
git commit -m "Full Deployed Machine Learning Model"
git push
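If you want to confirm that model.pkl actually went through Git LFS rather than as a regular Git blob, you can list the files LFS is managing:
# Lists every file tracked by Git LFS in this repository (model.pkl should appear)
git lfs ls-files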
After a couple of minutes, you should have a fully interactive machine learning model running in your Space!
Outside of the example images you put in the notebook, you should be able to upload your own images.
Here is the link to the full repository which you should have built at the end of this article.
6. Conclusion
Deploying your interactive machine learning model with Hugging Face simplifies the process of making your AI solutions accessible to a global audience. By following these steps, you can create a space where users can directly interact with your models, providing valuable feedback and real-world testing scenarios.
Outside of a cat and dog classifier, you can use this to create and upload your own machine learning model, such as this Face Mask Detection app I created.
Whether you’re a researcher, developer, or hobbyist, Hugging Face’s deployment tools simplify the journey of bringing AI solutions from concept to a wide audience, bridging the gap between complex machine learning algorithms and practical, everyday applications.
I hope you enjoyed this tutorial. Leave a comment if you have any questions or have your own model you want to show off.
Happy coding!