In addition, you can see previous posts where I showed how to use TensorFlow from Smalltalk to recognize objects in images.
In this post, I will show you how to get started with the Jetson Nano, how to run VASmalltalk and finally how to use the TensorFlow wrapper to take advantage of the 128 GPU cores.
What do you need before starting
Below is the whole list of supplies I gathered:
- NVIDIA Jetson Nano Developer Kit
- MicroSD card: I got a Samsung 128GB U3
- Power supply: you can choose either a more limited USB power supply or a much more powerful DC switching power supply. I got the latter.
- Case (optional): I like metal cases, so I got one with a Power & Reset control switch.
- Fan (optional): I could only find one fan that would fit on the Nano but it was very expensive.
- Wireless module (optional): the board does not come with built-in WiFi or Bluetooth, so I decided to buy this module.
Assembling the Nano and related hardware
I started by formatting the SD card. For this, I always use the “SD Card Formatter” program. Downloading the operating system image and flashing the SD card was easy… But the first downside is that for the first boot you NEED an external monitor, keyboard and mouse. There is no way to do it headless 🙁 After the first boot you can indeed enable SSH and VNC, but not before.
The next step was to install the WiFi and Bluetooth module. It was not a walk in the park, but not that difficult either. You need to partially disassemble the Nano, connect some wires, etc.:
Another annoyance is that the board is configured by default to boot from a USB power supply. I had ordered a DC supply instead, as it's the better choice if you want to draw maximum power. But to tell the Nano whether to use USB or DC, you must change the jumper on J48. And guess what? The development kit (USD 100) does NOT even include a single jumper. So there you are with your Nano and your DC power supply, and you can't boot it. Seriously? (BTW, before you ask: no, I didn't have a female-to-female cable with me that day to use as a workaround for the jumper.)
The other complicated part was assembling the case and the fan. For that, I needed to watch this video carefully a few times. Once it was built, it felt really solid and nice. BTW, the case did come with a jumper for J48, which was really nice since it meant I could use the DC power supply.
The fan itself was also complicated. The Noctua NF-A4x20 5V PWM I bought wouldn't fit easily: the NA-AV3 silicone anti-vibration mounts would not go through the holes of the Nano, and the screws for the fan provided with the case were too short. So I had to buy some extra screws that were long enough.
When I was ready to try the fan, I powered the board on and nothing happened. I thought I had done something wrong, and I had to re-open the case a few times… a painful process. I had almost given up when I found some help on the internet. Believe it or not, you must run a console command in order to start the fan: sudo jetson_clocks. After that, it started working.
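For reference, jetson_clocks maxes out the clocks and spins the fan up to full speed. If you only want to drive the fan, the Nano also exposes a PWM sysfs node that accepts a 0–255 duty-cycle value (the commonly documented path is /sys/devices/pwm-fan/target_pwm, written as root). The sketch below uses a scratch file as a stand-in for that node so it can run on any machine:

```shell
# Stand-in for /sys/devices/pwm-fan/target_pwm so the sketch runs off-board;
# on the Nano, point FAN_NODE at the real sysfs path and write it with sudo.
FAN_NODE=${FAN_NODE:-/tmp/target_pwm}

# 0 = fan off, 255 = full speed; 128 is roughly half duty cycle.
echo 128 > "$FAN_NODE"
cat "$FAN_NODE"    # → 128
```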
Setting it to run headless
While on most boards and operating systems this is easy, on the Nano it is a challenging part. The SSH part is easy and you barely need to do anything in particular. But VNC… OMG…. I followed all the recommendations provided in this guide. In my case, I could never get xrdp working: whenever I tried to connect from my Mac, it simply crashed…
As for VNC, after all the workarounds/corrections mentioned there, I was able to connect, but the resolution was too low (640×480). I spent quite some time googling until I found a workaround mentioned here. Basically, I ran sudo vim /etc/X11/xorg.conf and added these lines:
```
Section "Screen"
    Identifier "Default Screen"
    Monitor "Configured Monitor"
    Device "Default Device"
    SubSection "Display"
        Depth 24
        Virtual 1280 800
    EndSubSection
EndSection
```
In other words, I needed to change the size of the virtual display that is used when no monitor is connected (by default it was 640×480).
After rebooting, I was finally able to get a decent resolution with VNC.
Installing VASmalltalk dependencies
This part was easy; I basically followed the bash script from a previous post:
```shell
# Install VA dependencies for running headfull and the VA Environments tool
sudo apt-get install --assume-yes --no-install-recommends \
    libc6 \
    locales \
    xterm \
    libxm4 \
    xfonts-base \
    xfonts-75dpi \
    xfonts-100dpi

# Only necessary if we are using OpenSSL from Smalltalk
sudo apt-get install --assume-yes --no-install-recommends \
    libssl-dev

# Generate locales
sudo su
echo en_US.ISO-8859-1 ISO-8859-1 >> /etc/locale.gen
echo en_US.ISO-8859-15 ISO-8859-15 >> /etc/locale.gen
locale-gen
exit
```
Installing TensorFlow and VASmalltalk wrapper
The first thing you must do is either build TensorFlow from scratch for the Nvidia Jetson Nano with CUDA support, or get a pre-built binary from somewhere. I went with the latter, using the following bash script:
```shell
mkdir tensorflow
cd tensorflow
wget https://dl.photoprism.org/tensorflow/nvidia-jetson/libtensorflow-nvidia-jetson-nano-1.14.0.tar.gz
tar xvzf libtensorflow-nvidia-jetson-nano-1.14.0.tar.gz
cd lib
ln -s libtensorflow_framework.so libtensorflow_framework.so.1
```
The symbolic link is a workaround: in the shared libraries I downloaded, libtensorflow.so depends on libtensorflow_framework.so.1, but the library that was shipped was named libtensorflow_framework.so, so I made a symlink.
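To double-check that the symlink actually satisfies the dependency, you can run ldd libtensorflow.so inside the lib/ directory and confirm that libtensorflow_framework.so.1 resolves. The snippet below reproduces only the naming/symlink mechanics in a scratch directory (with an empty stand-in file) so it can run anywhere:

```shell
# Recreate the workaround in /tmp with a dummy file standing in for the real
# shared library; on the Nano you would do this in the tarball's lib/ directory
# and then verify with: ldd libtensorflow.so | grep framework
mkdir -p /tmp/tf-lib && cd /tmp/tf-lib
touch libtensorflow_framework.so
ln -sf libtensorflow_framework.so libtensorflow_framework.so.1
readlink libtensorflow_framework.so.1    # → libtensorflow_framework.so
```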
To install VASmalltalk and the TensorFlow wrapper, I followed the instructions from the GitHub repository. The only detail is that the ARM64 VM will be shipped with the upcoming 9.2 ECAP 3… so until that release is public, send me a private message and I will send it to you.
The last bit is that TensorFlow needs help so that libtensorflow can find libtensorflow_framework. What I did was export LD_LIBRARY_PATH before starting the VASmalltalk image. Another possibility is to move the shared libraries to /usr/local/lib. It's up to you.
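If you go the /usr/local/lib route instead, remember to refresh the dynamic loader cache afterwards. The commands below are an illustrative sketch (the source path assumes the tarball layout from earlier); the one live line shows how to query the cache, using libc as a stand-in library name so it runs on any Linux box:

```shell
# Copy the TensorFlow shared libraries somewhere the dynamic loader already
# searches, then rebuild the cache (both need root; paths are illustrative):
#   sudo cp ~/tensorflow/lib/libtensorflow*.so* /usr/local/lib/
#   sudo ldconfig
# Querying the cache works as a regular user; on the Nano you would grep for
# "tensorflow" instead of libc here:
ldconfig -p | grep -c 'libc.so'
```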
```shell
cd ~/Instantiations/VastEcap3_b437_7b7fc914f16f_linux/raspberryPi64/
export LD_LIBRARY_PATH=/home/mariano/Instantiations/libtensorflow-nvidia-jetson-nano-1.14.0/lib:$LD_LIBRARY_PATH
./abt64.sh
```
And all tests were green:
Confirming we are using GPU
By default, if a GPU is present (and the shared library was compiled with GPU support), TensorFlow will prefer the GPU over the CPU. From Smalltalk we can confirm this by checking the available TFDevice instances, inspecting the result of:

```smalltalk
(TFSession on: TFGraph create) devices
```
You can then run a simple test like TensorFlowCAPITest >> testAddControlInput and watch the log printed to the xterm. You should see that a GPU device is being used:
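Another way to confirm the GPU is doing the work is to leave tegrastats (bundled with L4T on the Nano) running in a second terminal while the tests execute: the GR3D_FREQ field is the GPU load and should jump above 0%. The snippet below parses a sample output line (the numbers are made up) just to show which field to look at:

```shell
# A tegrastats-style line; on the Nano you would pipe the live output of
# `sudo tegrastats` instead. The values here are illustrative.
SAMPLE='RAM 1980/3964MB (lfb 128x4MB) CPU [12%@1428,8%@1428,7%@1428,5%@1428] GR3D_FREQ 76%@921'
echo "$SAMPLE" | grep -o 'GR3D_FREQ [0-9]*%'    # → GR3D_FREQ 76%
```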
Using 128 GPU cores, TensorFlow and VASmalltalk to detect Kölsch beers with #esug19 pictures
OK. So we have TensorFlow running, all our tests passing and we are sure we are using GPU. The obvious next step is to run some real-world demo.
In a previous post you saw some examples of object detection. During the ESUG 2019 conference I wanted to show this demo, but instead of recognizing random objects in random images, I showed how to detect “beers” (Kölsch! We were in Cologne, Germany!) in the actual pictures people uploaded to Twitter under the #esug19 hashtag.
The code for that was fairly easy:
```smalltalk
ObjectDetectionZoo new
    imageFiles: OrderedCollection new;
    addImageFile: '/home/mariano/Instantiations/tensorflow/esug2019/beer1.png';
    graphFile: '/home/mariano/Instantiations/tensorflow/frozen_inference_graph-faster_resnet50.pb';
    labelsFile: 'examples/objectDetectionZoo/mscoco_label_map.pbtxt';
    prepareImageInput;
    prepareSession;
    predict;
    openPictureWithBoundingBoxesAndLabel
```
And here are the results:
The Nvidia Jetson Nano does offer good hardware at a reasonable price. However, it's much harder to set up than most of the boards out there. It's the first board that took me sooooo much time to get fully working.
But the worst part, in my opinion, is the state of Linux for Tegra (L4T): crashes everywhere, and it's almost impossible to set up something as simple as VNC. I would really like to see a better/newer OS for the Nano.
Once all the painful setup is done, it works well and provides nice GPU capabilities. We now have everything in place to start experimenting with it.
PS: Thanks Maxi Tabacman and Gera Richarte for doing a review of this post!