Background
Though I thought I knew AWS well enough, I found it rather time-consuming to dive deep and tame those mysterious errors, costing me several long evenings. In response, I turned to ChatGPT to help me debug the tedious error messages I encountered. Long story short, ChatGPT significantly reduced the time it took to nail down the problems. I couldn't have been happier when all the tests turned green and the deployment process was fully managed by code (Infrastructure as Code) in a single repository.
With this experience, I felt I had learned a bit about prompt engineering for software testing and debugging, so I decided to automate what I learned with agents that could do it for me. Here is my first AI-generated project. It does not do anything fancy, but it is a project for which I did not write a single line of code.
The Project
I did not write a single line of code for the project itself; instead, I set up some simple agents. Here's what I asked the AI helpers to do:
Use Poetry to manage my project packages.
Provide placeholder endpoints so that a junior developer can easily add new endpoints in the right place.
Write tests so that a junior developer can confidently add new endpoints or functionality.
Initially, the AI used SQLite as the development database, but I preferred PostgreSQL. So I guided it to switch to PostgreSQL and, as a consequence, to use Docker and Docker Compose to manage the full development cycle.
If you are interested in building developer productivity platforms, or are an expert in managing the full development cycle, I'd love to hear your suggestions on this specific point: whether there is a better way of managing the cycle from writing code, to testing it, to deploying it to production.
And here is the result from my loyal and hardworking AI tech lead: AI-Generated FastAPI App.
Who is this article (and likely the following articles) for?
If you are new to learning FastAPI, this project could be a great way to "learn through project-based learning," where you learn from an AI tech lead instead of spending hours or even days "learning through reading the docs."
If you are interested in how AI can help your daily software engineering workflow, this is also a good start. I am not sure how much bandwidth I have to keep writing, as I am building a startup, but this kind of writing serves as a good technical diary for me to remember and refer back to from time to time. If you think it's interesting to follow what I learn from having AI as my tech lead, it may not be a bad idea to read along.
Next…
Please share your feedback and suggestions, both from a FastAPI expert perspective and from an AI agent builder perspective. If you enjoy this article and the AI-generated project, please subscribe and stay in touch.
This uses Python to load PDFs and then the Anthropic LLM to generate a course syllabus. The syllabus is then fed back to the LLM along with the PDF data to generate lesson plans, quizzes in JSON, and so on, with each stage feeding into the next. What is great is that it automatically regionalizes the data for the country where I will be teaching, Timor-Leste.
https://github.com/ddtraveller/TEFLTools/blob/main/generate_course.py
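The staged pipeline described above can be sketched roughly as below. The function names, prompt wording, and model string are my own illustration of the pattern, not code copied from generate_course.py:

```python
# Hedged sketch of a staged pipeline: each stage's output is fed into
# the next prompt. Names and prompt text are illustrative assumptions.

def syllabus_prompt(pdf_text: str, country: str = "Timor-Leste") -> str:
    # Stage 1: ask the model for a syllabus, regionalized for the country.
    return (
        f"Using the following source material, draft a course syllabus "
        f"adapted for teaching in {country}:\n\n{pdf_text}"
    )

def lesson_plan_prompt(pdf_text: str, syllabus: str) -> str:
    # Stage 2: feed the syllabus back with the PDF data to get
    # lesson plans and quizzes as JSON.
    return (
        "Given this syllabus:\n" + syllabus +
        "\n\nand this source material:\n" + pdf_text +
        "\n\nproduce lesson plans and quizzes as a JSON object."
    )

if __name__ == "__main__":
    # The real script presumably extracts PDF text first (e.g. with a
    # PDF library) and then calls the Anthropic API, sketched here with
    # the official client (assumes ANTHROPIC_API_KEY is set).
    import anthropic
    client = anthropic.Anthropic()
    pdf_text = "..."  # extracted PDF text goes here
    msg = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=2048,
        messages=[{"role": "user", "content": syllabus_prompt(pdf_text)}],
    )
    print(msg.content[0].text)
```

Chaining plain prompt-builder functions like this keeps each stage testable on its own, since the API call is isolated at the end.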
This script uses the Google translation service, speech recognition, and GPT4All to make a chatbot that can speak and comprehend pretty well:
https://github.com/ddtraveller/TEFLbot/blob/main/src/main.py
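A listen-and-reply loop like this might look roughly as follows. The library choices (SpeechRecognition, gpt4all) and the history-trimming helper are my guesses at the stack, not code from main.py:

```python
# Speculative outline of a speak-and-listen chatbot loop.
# build_prompt is my own addition: it trims the conversation so a small
# local model's context window is not exceeded.

def build_prompt(history: list[tuple[str, str]], user_text: str,
                 max_turns: int = 5) -> str:
    # Keep only the most recent turns, then append the new user message.
    recent = history[-max_turns:]
    lines = [f"User: {u}\nBot: {b}" for u, b in recent]
    lines.append(f"User: {user_text}\nBot:")
    return "\n".join(lines)

if __name__ == "__main__":
    import speech_recognition as sr   # microphone -> text
    from gpt4all import GPT4All       # local LLM

    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # model name is an example
    recognizer = sr.Recognizer()
    history: list[tuple[str, str]] = []
    with sr.Microphone() as mic:
        while True:
            audio = recognizer.listen(mic)
            heard = recognizer.recognize_google(audio)  # Google speech API
            reply = model.generate(build_prompt(history, heard), max_tokens=200)
            history.append((heard, reply))
            print(reply)  # the real script also speaks the reply aloud
```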
This version loads data from files related to the question and sends it in the prompt to assist the answer. You could load an API reference doc and have the LLM read it all for you and answer your human question:
https://github.com/ddtraveller/TEFLbot/blob/main/src/main_RAG.py
It would be helpful to see a README on the Git repo you have.
https://yattishr.medium.com/unleash-the-power-of-local-open-source-llm-hosting-e33bf6a9679f
Interesting model manager.
The WebUI (Web User Interface) tool described in the guide allows users to interact with and manage open-source language models locally. Here are the key functionalities it provides:
Model Management:
- Download various open-source language models directly through the interface
- Load and switch between different models
- Manage model files and configurations

Text Generation:
- Interact with the loaded language model in a chat-like interface
- Generate text based on prompts or questions you provide

Parameter Tuning:
- Adjust various settings and parameters that affect the model's performance and output
- Toggle options like CPU usage, batch sizes, and sampling methods

Fine-tuning:
- Access tools for fine-tuning models on custom datasets
- Customize the model for specific use cases or domains

Performance Monitoring:
- View resource usage and model performance metrics

Multiple Interface Modes:
- Chat mode for conversational interactions
- Notebook mode for more structured input/output
- Instruct mode for giving specific instructions to the model

API Access:
- Potentially expose the model as an API for integration with other applications

Model Comparison:
- Load multiple models and compare their outputs side-by-side
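On the API Access point: many local hosting tools can expose an OpenAI-compatible HTTP endpoint when the API option is enabled. The sketch below assumes that convention; the port, route, and payload fields are assumptions, so check the tool's own documentation for the exact values:

```python
# Hedged example of calling a locally hosted model over an
# OpenAI-compatible completions endpoint. URL and port are assumptions.
import json
import urllib.request

def make_request(prompt: str,
                 url: str = "http://127.0.0.1:5000/v1/completions"):
    # Build a POST request with a JSON body; the field names follow the
    # OpenAI completions convention that many local servers mimic.
    payload = {"prompt": prompt, "max_tokens": 128, "temperature": 0.7}
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    # Only works if a local server is actually running on that port.
    with urllib.request.urlopen(make_request("Hello, local model!")) as resp:
        print(json.load(resp)["choices"][0]["text"])
```

Because the request format matches the OpenAI convention, other applications can integrate with the local model by simply pointing their base URL at localhost.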