Is anybody doing AI experiments with the XRP? Anybody doing any work using LLMs with the XRP?
I know DigiKey has done some things that may be helpful, but to my knowledge, no one has done any LLM stuff with the XRP yet. Perhaps you could be the first?
- Object detection with a Google Coral
- Voice recognition with a DFRobot voice recognition board
- Raspberry Pi 5 coprocessor
DigiKey isn’t working on any AI things here (at least, not to my knowledge), but you could certainly use the Pi 5 as an AI coprocessor (perhaps with the AI Camera or AI HAT with Hailo accelerator) and send the results to the XRP; a rough sketch of that link is below.
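To make the coprocessor idea concrete, here is a minimal sketch of one way that link could work: the Pi 5 runs the detector and streams results over a UART, and the XRP parses them in MicroPython. The serial port, baud rate, pins, and the `drivetrain` object name from XRPLib's defaults are all assumptions; adjust them to your wiring and XRPLib version.

```python
# Pi 5 side: stream detection results to the XRP as plain text lines.
# Port name and baud rate are assumptions -- match them to your wiring.
import serial

link = serial.Serial("/dev/ttyAMA0", 115200, timeout=1)

def send_detection(label: str, x_center: float):
    # Line-based protocol: "label,x_center\n", with x_center in [0, 1]
    link.write(f"{label},{x_center:.3f}\n".encode())
```

```python
# XRP side (MicroPython): read detections and steer toward the object.
# UART id and pins are assumptions -- use whichever pins are free on your board.
from machine import UART, Pin
from XRPLib.defaults import drivetrain  # object name assumed from XRPLib defaults

uart = UART(0, baudrate=115200, tx=Pin(0), rx=Pin(1))

while True:
    line = uart.readline()
    if not line:
        continue
    try:
        label, x = line.decode().strip().split(",")
        # x in [0, 1] with 0.5 meaning centered; turn toward the object
        turn = (float(x) - 0.5) * 0.6
        drivetrain.set_effort(0.3 + turn, 0.3 - turn)
    except ValueError:
        pass  # ignore malformed lines
```

A plain line-based text protocol like this keeps both sides easy to debug with an ordinary serial terminal.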
Hope this helps!
Thank you so much for your input on this. I have been doing some research and collecting information.
There is also a project at Tufts University where they are using ChatGPT to write programs for the XRP in Python. It is an internal project, since the API has to be paid for. But they did generate some pretty good pre-prompt text about the XRP so it could write better XRP programs. It does a pretty good job.
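For anyone curious what a setup like that might look like, here is a minimal sketch using the OpenAI Python client. I haven't seen the Tufts prompt itself, so the pre-prompt text and model name below are invented for illustration only.

```python
# Minimal sketch of priming an LLM with XRP context before asking for code.
# The pre-prompt text below is illustrative, not the Tufts project's prompt.
# Requires the openai package and an API key (paid, as noted above).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

XRP_PREPROMPT = """You write MicroPython programs for the XRP robot.
Import hardware objects with `from XRPLib.defaults import *`, which
provides `drivetrain`, `imu`, `reflectance`, and `rangefinder`.
Use `drivetrain.set_effort(left, right)` with efforts in [-1, 1]."""

def write_xrp_program(task: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # model choice is an assumption
        messages=[
            {"role": "system", "content": XRP_PREPROMPT},
            {"role": "user", "content": f"Write an XRP program to: {task}"},
        ],
    )
    return response.choices[0].message.content

print(write_xrp_program("follow a line using the reflectance sensors"))
```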
Thank you for that. Do you have a link to their project?
The stock XRP doesn’t have the complex inputs for which an ML solution, much less an LLM, is appropriate. The easiest thing that would create such complex signals is video. For that, especially for learning, you’d probably just use small convolutional kernels, not transformers (which require more memory for similar or marginally better accuracy).
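For a sense of scale, the kind of network meant here is something like this tiny PyTorch CNN; the input resolution and class count are placeholders.

```python
# A small-kernel CNN of the sort that suits low-resolution robot vision;
# far less memory than a transformer for similar accuracy at this scale.
import torch
import torch.nn as nn

class TinyVisionNet(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pool
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# e.g. one 64x64 RGB frame from a camera feed
logits = TinyVisionNet()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 4])
```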
If you want to do ML “on” the XRP (actually running on your dev machine), you probably don’t want to use LLMs. If you want to use LLMs “with” the XRP, you can probably add spoken commands or try “vibe coding” with something like Cursor or Copilot.
My work is in computer vision and I am going to add that to the XRP curriculum for next Fall. It is the middle of the FIRST Robotics season right now and that is the focus of the students with whom I volunteer.
If you’re interested in computer vision in robotics, there are many resources.
The other aspect of robotics for which ML is very interesting is reinforcement learning for complex motion or adjusting to the physical environment. The stock XRP doesn’t have a complex enough arm to demonstrate those challenges, though, so again you’re looking at extending the basic sensors and effectors.
Thank you so much for the response and the advice on how to proceed.
Another interesting thing I’ve done is to use Google Gemini AI Studio to help me program the robot to do various tasks. You can get to it here: https://aistudio.google.com/live. I ran it in one window, then clicked “stream realtime” in the sidebar and “share your screen”, and it was able to look at the XRPCode window and give me programming advice. It watches what you’re doing, and I could ask it for help, like how to follow a line. I was using Blockly, so it suggested which blocks I should be using. I found it pretty amazing, although not always correct. So, as they say, your mileage may vary. It’s fun to play with to see what’s coming down the road. It was free with my account, but I’m not sure whether you have to pay for it with other types of accounts.
I would love to see a vision-based ML project on the XRP, please make a post here when you have things to share!
Also, I’m pretty sure you could do a decent ML application with the IMU data. For example, you could try to detect when the XRP is picked up, falls off a table, or bumps into a wall. Obviously, those are things that could be done with a more “classic” algorithm, but if the goal is a simple ML project, something like that could work great!
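Something along those lines could look like the sketch below: compute simple features over a short IMU window, then classify with nearest centroid. The IMU method names and milli-g units are assumed from XRPLib, and the centroid values are placeholders you’d replace with numbers measured from your own labeled recordings.

```python
# Rough MicroPython sketch of simple ML over IMU windows on the XRP.
# IMU method names/units are assumed from XRPLib -- check your version.
import time
from XRPLib.defaults import imu  # default IMU object, name assumed

def window_features(n=50, dt_ms=10):
    # Mean and variance of acceleration magnitude over a short window.
    samples = []
    for _ in range(n):
        ax, ay, az = imu.get_acc_x(), imu.get_acc_y(), imu.get_acc_z()
        samples.append((ax * ax + ay * ay + az * az) ** 0.5)
        time.sleep_ms(dt_ms)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return (mean, var)

# Nearest-centroid "model": placeholder centroids (mean mg, variance),
# to be measured offline from labeled recordings of each event.
CENTROIDS = {
    "idle": (1000, 50),
    "picked_up": (1100, 4000),
    "falling": (300, 9000),
}

def classify(feat):
    def dist(label):
        c = CENTROIDS[label]
        return (feat[0] - c[0]) ** 2 + (feat[1] - c[1]) ** 2
    return min(CENTROIDS, key=dist)

while True:
    print(classify(window_features()))
```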
Thank you for the post. Lots of different ideas to work on.