Make your own AI robot: FROGIFY!

Me

I've been dealing with computers since I was two years old. I became interested in electronics through an advertisement I randomly saw in 2016. After 2020, my interest shifted to software development. I have many projects, both hardware and software.


The Frogify

This is a sweet AI-powered robot. I was inspired by the cleaning robot in WALL-E.



The Mode List


Chat mode : In chat mode, Frogify uses OpenAI ChatGPT, STT (speech-to-text) and TTS (text-to-speech) technologies. You can talk with Frogify like a friend in this mode.
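
Roughly, the chat pipeline looks something like this. It is a simplified NodeJS/TypeScript sketch, not the project's actual code; the model name and the system prompt are just placeholders.

import OpenAI from "openai";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Take the recognized speech text, ask ChatGPT, return the answer text.
async function chatWithFrogify(userText: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "system", content: "You are Frogify, a friendly little robot." }, // assumed prompt
      { role: "user", content: userText },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

The answer then goes two ways: to the robot so it can move, and to the desktop app so TTS can speak it.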

Code mode : Frogify helps you write your own code. It creates files and puts the generated code in them. If you want to use code mode, you need the Frogify desktop app.
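
To give an idea of what file creation could look like, here is a hedged sketch of the desktop app pulling code blocks out of ChatGPT's answer and saving them to disk. The file naming and the extension map are my assumptions, not the app's real behavior.

import { promises as fs } from "fs";
import * as path from "path";

// Find ```lang ... ``` blocks in the answer and save each one as a file.
async function saveCodeFromAnswer(answer: string, outDir: string): Promise<string[]> {
  const blockRegex = /```(\w+)?\n([\s\S]*?)```/g;
  const extensions: Record<string, string> = { python: "py", javascript: "js", cpp: "cpp" };
  const saved: string[] = [];
  let match: RegExpExecArray | null;
  let index = 0;
  while ((match = blockRegex.exec(answer)) !== null) {
    const ext = extensions[match[1] ?? ""] ?? "txt";
    const file = path.join(outDir, `frogify_${index++}.${ext}`);
    await fs.writeFile(file, match[2]);
    saved.push(file);
  }
  return saved;
}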

Dance mode : You can create dances in the desktop app or use the local dances already on Frogify.
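
A dance can be described as a list of timed poses. The field names below are assumptions to show the idea, not Frogify's real dance format.

// One step of a dance: arm angles, wheel speeds and how long to hold the pose.
interface DanceStep {
  leftArm: number;    // servo angle in degrees (0-180)
  rightArm: number;
  leftMotor: number;  // -255..255, sign = direction
  rightMotor: number;
  durationMs: number;
}

// A tiny example dance the desktop app could send to the controller.
const wave: DanceStep[] = [
  { leftArm: 90, rightArm: 160, leftMotor: 0, rightMotor: 0, durationMs: 400 },
  { leftArm: 90, rightArm: 20,  leftMotor: 0, rightMotor: 0, durationMs: 400 },
  { leftArm: 90, rightArm: 160, leftMotor: 0, rightMotor: 0, durationMs: 400 },
];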

Assistant mode : In this mode, Frogify uses Google Assistant. The desktop app sends audio from the microphone to the Raspberry Pi, and the Raspberry Pi sends the answer to the robot and the desktop app. The desktop app plays the incoming sound and the robot moves based on the answer.
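
Streaming the microphone to the Pi could look like the sketch below, assuming a WebSocket link and raw audio frames; the address, port and playAudio helper are placeholders, not the project's real transport.

import WebSocket from "ws";

const pi = new WebSocket("ws://raspberrypi.local:8080/assistant"); // assumed address

function playAudio(audio: Buffer): void {
  // Placeholder: hand the answer audio to the desktop app's own player.
  console.log(`received ${audio.length} bytes of answer audio`);
}

// The app's audio capture would call this for every microphone chunk.
function sendMicChunk(chunk: Buffer): void {
  if (pi.readyState === WebSocket.OPEN) {
    pi.send(chunk); // binary frame of raw microphone audio
  }
}

pi.on("message", (data) => {
  playAudio(data as Buffer); // the Pi answers with Google Assistant's reply audio
});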


You can change modes with voice commands or from the desktop app.
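
A voice mode switch can be as simple as looking for a mode keyword in the recognized text; this is just a sketch of the idea, not the project's actual command matching.

type Mode = "chat" | "code" | "dance" | "assistant";

// Very small matcher: find a mode keyword in the recognized speech.
function parseModeCommand(transcript: string): Mode | null {
  const text = transcript.toLowerCase();
  if (!text.includes("mode")) return null;
  for (const mode of ["chat", "code", "dance", "assistant"] as const) {
    if (text.includes(mode)) return mode;
  }
  return null;
}

// e.g. parseModeCommand("Frogify, switch to dance mode") === "dance"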


The Parts of Frogify

Frogify has three parts:


  • Robot
  • Controller
  • Desktop-app


Robot Part : This is the robot in the picture. This part uses a NodeMCU module as its brain. The NodeMCU gets data from the controller and processes it. It uses an SH1106 128x32 I2C OLED screen for the eyes, SG90 servo motors to move the arms and 6V 400RPM geared DC motors to drive the robot. The NodeMCU drives the DC motors through an L293D (you can replace it with an L293B). A 3.7V 950mAh LiPo battery powers the robot and a TP4056 Type-C module charges it. The servo motors work with 5V, so I added a boost converter board (MT3608). The NodeMCU drives all of these parts based on the incoming data.
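
The exact wire format between the controller and the NodeMCU isn't shown here, but a simple JSON command covering the eyes, arms and wheels would be enough; all the field names below are assumptions.

// Assumed command packet the controller could send to the NodeMCU.
interface RobotCommand {
  eyes: "open" | "blink" | "happy";  // frame to draw on the SH1106 OLED
  leftArm: number;                   // SG90 angle, 0-180 degrees
  rightArm: number;
  leftMotor: number;                 // L293D channel, -255..255 (PWM + direction)
  rightMotor: number;
}

const forwardAndWave: RobotCommand = {
  eyes: "happy",
  leftArm: 45,
  rightArm: 135,
  leftMotor: 200,
  rightMotor: 200,
};

// Sent as JSON over the controller-to-robot link (WiFi or serial).
console.log(JSON.stringify(forwardAndWave));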




Controller Part : All the software work takes place here. This part uses a Raspberry Pi as its brain, running NodeJS. In chat mode, the RPi takes the STT text from the desktop app and makes a request to OpenAI ChatGPT. Then it sends the answer to the robot and the desktop app. The robot moves based on this and the desktop app turns the answer into speech with TTS. It uses the Azure Speech service for TTS and STT. Code mode works very similarly to chat mode; the only difference is that the desktop app doesn't speak the answer, it creates files and puts the code from the answer into them. In dance mode, the RPi gets dance data from the desktop app and sends it to the robot. If a song is necessary, the desktop app plays it. In assistant mode (that was the hardest part for me :D), the desktop app sends sound from the microphone to the RPi. The RPi feeds this sound to Google Assistant and sends the answer to the desktop app and movement data to the robot. The robot moves based on the data and the desktop app plays the answer.
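
Putting that flow together, the controller is basically a dispatcher that routes each message from the desktop app by the current mode. This is a minimal sketch with placeholder helpers, not the real controller code.

type Mode = "chat" | "code" | "dance" | "assistant";

interface AppMessage {
  mode: Mode;
  text?: string;    // STT result (chat and code modes)
  dance?: unknown;  // dance steps from the editor
  audio?: Buffer;   // raw microphone audio (assistant mode)
}

// Placeholder helpers; the real project has its own implementations.
async function askChatGPT(text: string): Promise<string> { return `answer to: ${text}`; }
function sendToRobot(data: unknown): void { console.log("-> robot", data); }
function sendToDesktopApp(data: unknown): void { console.log("-> desktop app", data); }
async function forwardToGoogleAssistant(audio: Buffer): Promise<void> { console.log("-> assistant", audio.length); }

// Every message from the desktop app is routed by the current mode.
async function handleMessage(msg: AppMessage): Promise<void> {
  switch (msg.mode) {
    case "chat":
    case "code": {
      const answer = await askChatGPT(msg.text ?? "");
      sendToRobot(answer);       // the robot animates while the answer is handled
      sendToDesktopApp(answer);  // spoken in chat mode, saved as files in code mode
      break;
    }
    case "dance":
      sendToRobot(msg.dance);    // forward the dance steps; the app plays the song
      break;
    case "assistant":
      await forwardToGoogleAssistant(msg.audio ?? Buffer.alloc(0));
      break;
  }
}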


Desktop-app : All modes start from here. You can create dances with music and initialize new eyes from the app. It's essential to the project.
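
For the eyes, initializing a new design means turning an image into bytes for the 128x32 OLED. The robot's exact frame format isn't described here, so this sketch assumes the common page/column layout most SH1106 libraries use (each byte holds 8 vertical pixels, bit 0 on top).

const WIDTH = 128;
const HEIGHT = 32;

// Pack a 128x32 monochrome frame (true = pixel on) into 512 bytes.
function packEyeFrame(pixels: boolean[][]): Uint8Array {
  const bytes = new Uint8Array((WIDTH * HEIGHT) / 8);
  for (let page = 0; page < HEIGHT / 8; page++) {
    for (let x = 0; x < WIDTH; x++) {
      let b = 0;
      for (let bit = 0; bit < 8; bit++) {
        if (pixels[page * 8 + bit][x]) b |= 1 << bit;
      }
      bytes[page * WIDTH + x] = b;
    }
  }
  return bytes;
}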


PCB and 3D parts




The robot's PCB


Robot design


I don't have a hot air rework station, so I need a PCB assembly service for the SMD parts. I also need a 3D printing service for my robot's parts.


The result



Support from PCBWay would be an enormous advantage and would make the project better.

