Running Jupyter Notebook in the cloud with a GPU in 30 seconds

by Nik Paushkin, CEO
Last updated on 15 Apr, 2022

In this article, we cover several basic cases of running a remote Jupyter Notebook server with either the classic interface or the modern JupyterLab one. We'll show how easily you can run it entirely online and boost it with one or several cloud GPUs using the Jupyter Notebook application on Puzl.

Simply run Jupyter Notebook

  1. Go to your Puzl dashboard and open Marketplace, the section that allows you to run cloud-native applications in a couple of clicks. There are not many applications in the Puzl Marketplace yet, so just click Jupyter Notebook and press the “Install” button.


  2. Choose which user interface you prefer to install: the classic Jupyter Notebook or JupyterLab. Since the new-generation JupyterLab UI has many more features, it is recommended and selected by default.

  3. Enter a password, which you will use to access your Jupyter Notebook interface after installation.

  4. Choose the environment (Docker image) you need to run your Jupyter Notebook. If you use Jupyter for machine learning tasks, you can choose an image with TensorFlow, PyTorch, or any other ML framework. Jupyter supports many runtimes (kernels), such as Python, R, etc. To run Jupyter Notebook with a pre-installed R kernel, use the "R notebook" Docker image. A quick snippet for checking the chosen environment follows this list.

  5. Optionally, you can configure the resources for your Jupyter Notebook server using the advanced options. The lowest recommended amounts are 1 vCPU and 4 GB of RAM, which are set by default. The hourly price for your cloud Jupyter Notebook app is calculated from these amounts and is 0.023 EUR per hour for the recommended minimum configuration.

  6. Set the size of the fast NVMe-based persistent volume, which you will use to store your datasets and exploration results. The volume will be created automatically. Please note that the volume cost is calculated separately from the hourly price mentioned in step 5.

  7. Finally, press “Install”. Wait no more than two minutes, and your Jupyter Notebook will be started. You can open the user interface via the link provided on your running Jupyter Notebook application page.
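
Once the notebook is running, you can confirm the environment in a single cell. This is a minimal sketch that assumes you picked the PyTorch image; for a TensorFlow image, swap the import accordingly. On the minimum CPU-only configuration it will simply report that no GPU is visible.

```python
# Run in the first notebook cell to confirm the environment.
# Assumes the PyTorch image was selected; adapt for TensorFlow if needed.
import sys
import torch

print("Python:", sys.version.split()[0])
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())  # False on a CPU-only plan
print("GPUs visible:", torch.cuda.device_count())
```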


Install Jupyter Notebook extensions in a few clicks

All of Puzl's Docker images contain the dependencies needed to install Jupyter Notebook extensions from the graphical interface alone. All the extensions you install are stored on the volume that you set up in step 6 of the previous chapter.


Load your dataset first, then run Jupyter Notebook with GPU

Most machine learning tasks require a large amount of data. To keep costs down, you probably don't want to rent a lot of computing resources (CPU, RAM, GPU) just for the time it takes to load that data. For this case, Puzl lets you load your data first, and only then request the amounts of CPU, RAM and GPU required for your computations.

  1. To load data, you need to set up access to the volume. To do that, stop the Jupyter Notebook application you launched in the first section. Once your Jupyter Notebook server is stopped, the volume is freed, and you can access it via SSH using a pod with the smallest resource requests: only 128 MB of RAM and 0.1 vCPU.

  2. To access the volume, go to the volume’s page and click the “Setup SSH access” button. Your pod with the SSH server will be ready in several seconds (don’t forget to add your SSH public key in the Security section). Once your pod is ready, you can connect to your volume via SSH and load data using scp, wget or rsync (see the sketch after this list). See how to use rsync to transfer a large amount of data in this article.


  3. When you are done with data loading, just delete your SSH pod using the “Delete Deployment” button on the pod’s page. Once the pod is deleted, your volume is available to be mounted back to your Jupyter Notebook application.

  4. Go to your Jupyter Notebook application page and click the "Resume" button. In the dialog, pick the "Resume with different settings" option. When you are redirected to the upgrade page, choose the volume where you uploaded the data from the picker of existing volumes. Then press "Show advanced settings" and choose the required number of GPUs. Finally, press the green "Upgrade" button to run your Jupyter Notebook with GPU.
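
As a rough illustration of the transfer described in step 2, here is one way to push a local dataset to the volume over SSH with rsync, wrapped in Python. The host, user and remote path below are placeholders, not real Puzl values; substitute the connection details shown on your volume's SSH access page, and make sure rsync and an SSH client are installed on your machine.

```python
# Sketch: sync a local dataset folder to the volume mounted in the SSH pod.
# Host, user and remote path are placeholders, not real Puzl values.
import subprocess

local_dir = "./dataset/"  # trailing slash: copy the folder's contents
remote = "user@ssh-pod.example.puzl.cloud:/data/dataset/"

subprocess.run(
    ["rsync", "-avz", "--progress", "-e", "ssh", local_dir, remote],
    check=True,  # raise an error if the transfer fails
)
```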


Now your Jupyter Notebook is up and running with a GPU, and your data is accessible from its interface.
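
A quick sanity check in a fresh cell confirms that both the requested GPUs and the uploaded dataset are reachable. This again assumes the PyTorch image, and the data path is a placeholder for whatever folder you created on the volume.

```python
# Confirm the requested GPUs and the uploaded data are visible after the upgrade.
# The data directory is a placeholder; use the folder you created on the volume.
from pathlib import Path
import torch

print("GPUs visible:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(f"  cuda:{i} ->", torch.cuda.get_device_name(i))

data_dir = Path.home() / "dataset"  # placeholder location on the volume
print("Files on the volume:", sorted(p.name for p in data_dir.iterdir()))
```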

What’s next?

In the near future, we plan to implement pipelines to give you the ability to run your Jupyter Notebooks in a CI/CD flow and organize your MLOps processes. For now, you can set up such pipelines yourself with our GitLab runner application or with the native Kubernetes API.
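
As one hedged example of the Kubernetes route, the sketch below submits a notebook execution as a one-off Job using the official `kubernetes` Python client. The image name, namespace and notebook path are placeholders rather than Puzl-specific values, and it assumes your kubeconfig already points at your namespace.

```python
# Rough sketch: execute a notebook as a one-off Kubernetes Job via the official
# `kubernetes` Python client. Image, namespace and notebook path are placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes kubeconfig points at your namespace

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="notebook-run"),
    spec=client.V1JobSpec(
        backoff_limit=0,  # do not retry a failed run
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",
                containers=[
                    client.V1Container(
                        name="runner",
                        image="jupyter/tensorflow-notebook",  # placeholder image
                        command=[
                            "jupyter", "nbconvert", "--to", "notebook",
                            "--execute", "/home/jovyan/work/train.ipynb",
                        ],
                    )
                ],
            )
        ),
    ),
)

client.BatchV1Api().create_namespaced_job(namespace="my-namespace", body=job)
```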

We are also going to provide configurable environments, so that your Jupyter Notebook can be built and launched automatically with all the necessary dependencies once you describe them.

Don’t forget to follow us on Twitter so you don’t miss important updates!