This Droplet 1-Click is pre-installed with Conda and Jupyter, providing you with a streamlined environment for data science and machine learning tasks. Conda allows you to easily manage software packages and create isolated environments, while Jupyter provides a web-based interface for interactive coding and data exploration. With this image, you can quickly set up a powerful data science environment without the hassle of manual installation.
Package | Version | License |
---|---|---|
JupyterLab + Jupyter | 4.0.1 | |
Miniconda | 23.1.2 | |
Click the Deploy to DigitalOcean button to create a Droplet based on this 1-Click App. If you aren’t logged in, this link will prompt you to log in with your DigitalOcean account.
In addition to creating a Droplet from the Jupyter Notebook 1-Click App using the control panel, you can also use the DigitalOcean API. As an example, to create a 4GB Jupyter Notebook Droplet in the SFO2 region, you can use the following `curl` command. You need to either save your API access token to an environment variable or substitute it into the command below.
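For example, you could store the token in an environment variable first (the value below is a placeholder):

export TOKEN=your_digitalocean_api_token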
curl -X POST -H 'Content-Type: application/json' \
-H "Authorization: Bearer $TOKEN" -d \
'{"name":"choose_a_name","region":"sfo2","size":"s-2vcpu-4gb","image": "sharklabs-jupyternotebook"}' \
"https://api.digitalocean.com/v2/droplets"
Before deploying this Droplet, consider the following guidance to ensure you choose the right configuration for your needs:
This Droplet comes with the software versions listed in the table above pre-installed.
After deploying the Jupyter Droplet, follow these steps to get started:
SSH into your Droplet:
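For example (replace the placeholder with your Droplet's public IP address; new Droplets are typically reachable as root with your SSH key):

ssh root@your_droplet_ip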
Switch to the Ubuntu user:
su - ubuntu
Accessing the Jupyter Notebook:
* To start JupyterLab manually, activate its Conda environment and launch it:
conda activate jupyter
jupyter lab
* Alternatively, you can use the `notebook.sh` script by executing:
./notebook.sh
* The script will handle starting the JupyterLab instance for you.
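By default, `jupyter lab` listens on port 8888 and prints a URL containing an access token. If the server is only bound to localhost on the Droplet (an assumption; this image may expose it differently), one common way to open it from your workstation is an SSH tunnel:

# Assumes the default JupyterLab port (8888) and the ubuntu user on the Droplet
ssh -N -L 8888:localhost:8888 ubuntu@your_droplet_ip
# Then browse to http://localhost:8888 and paste the token printed by `jupyter lab`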
Creating and Managing Conda Environments:
Once you have accessed the Jupyter Notebook, you can use Conda to create isolated environments for your projects.
To create a new Conda environment, use the following command:
conda create --name myenv
Activate the environment by running:
conda activate myenv
Install required packages and libraries within the environment using Conda or pip.
For more detailed instructions and tips on using Conda and Jupyter, please refer to the Conda documentation: https://docs.conda.io/
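As a sketch of a typical workflow, the commands below create an environment, install a few packages, and register the environment as a Jupyter kernel so it appears in the JupyterLab launcher (the environment name and package list are only examples; ipykernel is needed for the last step):

conda create --name myenv python=3.10 -y
conda activate myenv
conda install -y numpy pandas ipykernel
python -m ipykernel install --user --name myenv --display-name "Python (myenv)"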
If your Droplet has limited compute, skip this example and go to example 2. Running it on fewer than 16 vCPUs will likely cause the process to be killed.
A sample application included with this Droplet is the “stable diffusion 1.5” model. To run this application, ensure that your Droplet has sufficient resources; as noted above, fewer than 16 vCPUs will likely cause the process to be killed.
Please note that running the “stable diffusion 1.5” model can be computationally intensive. In our tests, it takes approximately 1 minute to generate a single image.
Commands:
su - ubuntu # if you have not switched to ubuntu user already
cd examples/stable_diffusion.openvino/
conda activate stable-diffusion-1.5
python demo.py --prompt "Beautiful lake, sunset, and a mountain"
The output is stored in the output.png file. If you are connected to the Jupyter notebook, you should be able to view the file in the notebook itself.
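If you would rather download the image to your local machine, something like the following should work (the path assumes the commands above were run from the ubuntu user's home directory):

scp ubuntu@your_droplet_ip:~/examples/stable_diffusion.openvino/output.png .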
Intel maintains a number of sample applications that you can try out and customize. These applications have been optimized to work on Intel hardware. Note that some of these may need a GPU and will not work on CPU.
This tutorial guides you through running the DistilBERT sequence classification notebook in a JupyterLab environment.
Follow the steps below:
Navigating to the Notebook:
/home/ubuntu/examples/openvino_notebooks/notebooks/229-distilbert-sequence-classification
* Open the `229-distilbert-sequence-classification.ipynb` notebook.
Selecting the Kernel:
* Choose the kernel associated with the openvino_notebooks Conda environment.
Running the Notebook:
* Run the cells in order, or use Run → Run All Cells from the JupyterLab menu.
NOTE:
During the optimization step, you may encounter an error stating “mo command not found”. This is a known issue and can be resolved by replacing mo with its full path, /home/ubuntu/.conda/envs/openvino_notebooks/bin/mo. The permanent fix is to update the path in the Conda activation script.
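As noted, the permanent fix is to update the activation script. Conda runs any script placed under an environment's etc/conda/activate.d directory when that environment is activated, so one option is a small hook that prepends the environment's bin directory to PATH. This is a sketch, assuming the environment path shown above (the hook filename is arbitrary):

# Run once as the ubuntu user; creates an activation hook for the openvino_notebooks environment
mkdir -p /home/ubuntu/.conda/envs/openvino_notebooks/etc/conda/activate.d
echo 'export PATH="/home/ubuntu/.conda/envs/openvino_notebooks/bin:$PATH"' \
  > /home/ubuntu/.conda/envs/openvino_notebooks/etc/conda/activate.d/mo_path.sh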
The sentiment analysis typically takes around 0.1 seconds, demonstrating the efficiency of the model on CPU.
If you encounter issues during the process, you can refer to the following resources:
* The Readme.txt file located in the /home/ubuntu directory. It contains common tips and potential solutions for frequently encountered issues.
* To reclaim disk space by deleting the bundled examples, refer to the save-9GB-by-deleting-examples file in the /home/ubuntu directory for instructions on how to perform this cleanup.