13 Feb 2025, University of Wollongong, GIFT City, Gandhinagar
- Basic LLM Calls – Sending requests and processing responses
- Structured Output – Formatting responses in structured formats (a combined sketch of these first two items follows this list)
- Tool Use – Integrating external tools with the LLM
- Retrieval – Using memory and external sources for better responses
- Prompt Chaining – Structuring multi-step AI tasks
- Routing – Directing requests to specialized handlers
- Parallelization – Running multiple AI processes simultaneously
- CrewAI Framework Step by Step
- Agent: Finance Agent App
- SmolAgent Framework Step by Step
- Agent: AI News Agent App
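As a warm-up for the first two agenda items, here is a minimal sketch of a basic LLM call that returns structured (JSON) output. It assumes the OpenAI Python SDK and an `OPENAI_API_KEY` in the environment; the model name and helper function are illustrative and not taken from the workshop code.

```python
# Minimal sketch: one LLM call whose reply is parsed into a Python dict.
# Assumptions: OpenAI Python SDK, OPENAI_API_KEY set, illustrative model name.
import json
from openai import OpenAI

client = OpenAI()

def ask_structured(prompt: str) -> dict:
    """Send one chat request and parse the JSON reply into a Python dict."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Reply with a single JSON object only."},
            {"role": "user", "content": prompt},
        ],
        response_format={"type": "json_object"},  # ask the API for valid JSON
    )
    return json.loads(response.choices[0].message.content)

print(ask_structured("Summarize 'Team sync moved to Friday 3pm' as {summary, is_meeting}"))
```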
Prompt chaining breaks down complex AI tasks into smaller, more manageable steps. Each step processes and validates the output from the previous step, improving control and reliability.
```mermaid
graph LR
    A[User Input] --> B[LLM 1: Extract]
    B --> C{Gate Check}
    C -->|Pass| D[LLM 2: Parse Details]
    C -->|Fail| E[Exit]
    D --> F[LLM 3: Generate Confirmation]
    F --> G[Final Output]
```
Step Breakdown:
| Step | Description |
|---|---|
| Step 1: Extract & Validate | Detects whether the input is a valid calendar request, assigns a confidence score, and filters out irrelevant requests |
| Step 2: Parse Details | Extracts structured information (date, time, participants) and converts natural language into a structured format |
| Step 3: Generate Confirmation | Creates a user-friendly response and generates calendar links if necessary |
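A minimal sketch of this three-step chain, assuming the OpenAI Python SDK; the model name, confidence threshold, prompts, and helper names are illustrative rather than the workshop's exact code.

```python
# Prompt chaining sketch: extract & validate -> gate check -> parse -> confirm.
import json
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative

def call_json(system: str, user: str) -> dict:
    """One LLM call that returns a parsed JSON object."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

def process_calendar_request(text: str) -> str | None:
    # Step 1: extract & validate; the confidence score drives the gate check.
    check = call_json(
        "Return JSON {is_calendar_request: bool, confidence: number 0-1}.", text)
    if not check["is_calendar_request"] or check["confidence"] < 0.7:
        return None  # gate failed: exit early

    # Step 2: parse details into a structured format.
    details = call_json(
        "Extract JSON {title, date, time, participants} from this request.", text)

    # Step 3: generate a user-friendly confirmation from the parsed details.
    confirmation = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user",
                   "content": f"Write a short confirmation for: {json.dumps(details)}"}],
    )
    return confirmation.choices[0].message.content

print(process_calendar_request("Book a call with Priya next Tuesday at 4pm"))
```

Each step only sees validated output from the previous one, which is what gives the chain its control and reliability.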
Routing is a pattern that directs different types of requests to specialized handlers, allowing for clean separation of concerns and optimized processing.
```mermaid
graph LR
    A[User Input] --> B[LLM Router]
    B --> C{Route}
    C -->|New Event| D[New Event Handler]
    C -->|Modify Event| E[Modify Event Handler]
    C -->|Other| F[Exit]
    D --> G[Response]
    E --> G
```
Step Breakdown:
| Component | Description |
|---|---|
| Router | Classifies requests as new event or modification and provides confidence scoring |
| Specialized Handlers | New Event Handler creates calendar events; Modify Event Handler updates existing events |
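A minimal sketch of the router and its handlers, under the same OpenAI SDK assumptions as above; the route labels, threshold, and handler stubs are illustrative.

```python
# Routing sketch: an LLM classifier dispatches to specialized handlers.
import json
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative

def route(text: str) -> dict:
    """LLM router: classify the request and return a confidence score."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system",
             "content": "Classify the request as JSON "
                        "{route: 'new_event'|'modify_event'|'other', confidence: number 0-1}."},
            {"role": "user", "content": text},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

def handle_new_event(text: str) -> str:
    return f"Creating calendar event for: {text}"        # specialized handler stub

def handle_modify_event(text: str) -> str:
    return f"Updating existing event based on: {text}"   # specialized handler stub

def handle_request(text: str) -> str:
    decision = route(text)
    if decision["route"] == "other" or decision["confidence"] < 0.7:
        return "Sorry, I can only create or modify calendar events."
    handlers = {"new_event": handle_new_event, "modify_event": handle_modify_event}
    return handlers[decision["route"]](text)

print(handle_request("Move my 1:1 with Sam to Thursday"))
```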
Parallelization improves efficiency by running multiple LLM calls concurrently, each analyzing a different aspect of the request.
```mermaid
graph LR
    A[User Input] --> B[Calendar Check]
    A --> C[Security Check]
    B --> D{Aggregate}
    C --> D
    D -->|Valid| E[Continue]
    D -->|Invalid| F[Exit]
```
Step Breakdown:
| Component | Description |
|---|---|
| Parallel Checks | Calendar Validation ensures the request is valid; Security Check screens for prompt injection |
| Aggregation Layer | Merges results from the parallel checks and makes the final validation decision |
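A minimal sketch of the two parallel checks, assuming the async OpenAI client (`AsyncOpenAI`); the check prompts, field names, and model are illustrative.

```python
# Parallelization sketch: run two independent LLM checks concurrently, then aggregate.
import asyncio
import json
from openai import AsyncOpenAI

client = AsyncOpenAI()
MODEL = "gpt-4o-mini"  # illustrative

async def ask_json(system: str, user: str) -> dict:
    response = await client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

async def validate(text: str) -> bool:
    # Both checks run at the same time instead of one after the other.
    calendar_check, security_check = await asyncio.gather(
        ask_json("Return JSON {is_valid_calendar_request: bool}.", text),
        ask_json("Return JSON {is_prompt_injection: bool}.", text),
    )
    # Aggregation layer: both parallel checks must pass.
    return (calendar_check["is_valid_calendar_request"]
            and not security_check["is_prompt_injection"])

print(asyncio.run(validate("Schedule a dentist appointment for Monday 10am")))
```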
The orchestrator-workers pattern uses a central LLM to dynamically analyze, coordinate, and synthesize responses from specialized workers. This is useful for tasks requiring structured content generation.
```mermaid
graph LR
    A[Topic Input] --> B[Orchestrator]
    B --> C[Planning Phase]
    C --> D[Writing Phase]
    D --> E[Review Phase]
    style D fill:#f9f,stroke:#333,stroke-width:2px
```
Step Breakdown:
| Orchestrator | Planning Phase | Writing Phase | Review Phase |
|---|---|---|---|
| Analyzes the blog topic and requirements | Breaks content into sections | Assigns sections to specialized writers | Evaluates content flow and cohesion |
| Generates a structured content plan | Defines word count and writing style | Maintains context and consistency | Suggests improvements |
| Oversees content cohesion | - | - | Produces a polished final version |
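A minimal sketch of the orchestrator-workers loop, again assuming the OpenAI SDK; the planning schema, section prompts, and model are illustrative.

```python
# Orchestrator-workers sketch: plan sections, delegate writing, then review.
import json
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # illustrative

def complete(prompt: str, json_mode: bool = False) -> str:
    kwargs = {"response_format": {"type": "json_object"}} if json_mode else {}
    response = client.chat.completions.create(
        model=MODEL, messages=[{"role": "user", "content": prompt}], **kwargs)
    return response.choices[0].message.content

def write_blog(topic: str) -> str:
    # Planning phase: the orchestrator breaks the topic into sections.
    plan = json.loads(complete(
        f"Plan a blog post on '{topic}'. Return JSON "
        '{"sections": [{"title": str, "word_count": int}]}.', json_mode=True))

    # Writing phase: each section is delegated to a worker call.
    drafts = [
        complete(f"Write about {s['word_count']} words for the section "
                 f"'{s['title']}' of a blog post on '{topic}'.")
        for s in plan["sections"]
    ]

    # Review phase: the orchestrator checks flow and produces the final version.
    return complete("Review and polish this draft for flow and cohesion:\n\n"
                    + "\n\n".join(drafts))

print(write_blog("Agentic design patterns for LLM applications")[:500])
```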
This section introduces additional architectures for building AI agents, providing a structured overview of their workflows and modular designs.
CrewAI follows a modular, step-by-step approach built around agents, tasks, tools, and a crew that runs them.
Example: Finance Agent App
Workflow
```mermaid
graph TD
    A[Start] --> B[Initialize Streamlit App]
    B --> C[Set Up API Keys & Load Environment Variables]
    C --> D[User Inputs Company Name]
    D --> E{Start Analysis Button Clicked?}
    E -- Yes --> F[Display Progress & Setup Agents]
    F --> G[Create Financial Analyst Agent]
    G --> H[Create Investment Strategy Reviewer Agent]
    H --> I[Initialize Stock Market Scraper Tool]
    I --> J[Define Stock Market Analysis Task]
    J --> K[Define Investment Review Task]
    K --> L[Create Financial Analysis Crew]
    L --> M[Run AI Analysis Crew Process]
    M --> N[Generate Financial Report]
    N --> O[Display Final Financial Analysis Report]
    E -- No --> P[Wait for User Action]
    style A fill:#ffcc00,stroke:#333,stroke-width:2px
    style O fill:#ffcc00,stroke:#333,stroke-width:2px
    style M fill:#00ccff,stroke:#333,stroke-width:2px
    style J fill:#00ccff,stroke:#333,stroke-width:2px
```
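A minimal sketch of the crew behind this workflow, assuming the `crewai` and `crewai_tools` packages; `ScrapeWebsiteTool` stands in for the stock market scraper tool, and the prompts, backstories, and company name are illustrative rather than the app's exact code.

```python
# CrewAI sketch: two agents, two sequential tasks, one crew (Streamlit UI omitted).
from crewai import Agent, Task, Crew
from crewai_tools import ScrapeWebsiteTool

company = "Infosys"            # in the app this comes from the Streamlit text input
scraper = ScrapeWebsiteTool()  # stands in for the stock market scraper tool

analyst = Agent(
    role="Financial Analyst",
    goal=f"Analyze recent stock market data and news for {company}",
    backstory="An experienced equity analyst who grounds claims in data.",
    tools=[scraper],
)
reviewer = Agent(
    role="Investment Strategy Reviewer",
    goal="Review the analysis and turn it into an investment recommendation",
    backstory="A cautious strategist who stress-tests every recommendation.",
)

analysis_task = Task(
    description=f"Research and summarize the current financial position of {company}.",
    expected_output="A concise financial analysis with key metrics and risks.",
    agent=analyst,
)
review_task = Task(
    description="Review the analysis and produce a final investment report.",
    expected_output="A structured report with a clear recommendation.",
    agent=reviewer,
)

crew = Crew(agents=[analyst, reviewer], tasks=[analysis_task, review_task])
print(crew.kickoff())  # the Streamlit app renders this result as the final report
```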
SmolAgent provides a live coding guide for building lightweight agents. Key highlights include:
- Streamlit App: running with Colab as the backend server
- AI News Agent App
Workflow
```mermaid
graph TD
    A[Start] --> B[Initialize Streamlit App]
    B --> C[Initialize LLM and Search Tool]
    C --> D[User Inputs News Topic]
    D --> E[Set Search Depth and Analysis Type]
    E --> F{Analyze News Button Clicked?}
    F -- Yes --> G[Perform DuckDuckGo Search]
    G --> H{Results Found?}
    H -- Yes --> I[Create Analysis Prompt]
    I --> J[Generate Analysis Using LLM]
    J --> K[Display Analysis Results]
    K --> L[Log Activity]
    H -- No --> M[Show No Results/Error Message]
    F -- No --> N[Wait for User Action]
    K --> O[Show Tips for Better Results]
    M --> O
    O --> P[End]
    style A fill:#ffcc00,stroke:#333,stroke-width:2px
    style P fill:#ffcc00,stroke:#333,stroke-width:2px
    style G fill:#00ccff,stroke:#333,stroke-width:2px
    style J fill:#00ccff,stroke:#333,stroke-width:2px
```
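A minimal sketch of the agent behind this workflow, assuming the `smolagents` package (`CodeAgent`, `DuckDuckGoSearchTool`, `HfApiModel`); the topic and prompt stand in for the Streamlit inputs and are not the app's exact code.

```python
# SmolAgent sketch: a CodeAgent with a DuckDuckGo search tool (Streamlit UI omitted).
from smolagents import CodeAgent, DuckDuckGoSearchTool, HfApiModel

agent = CodeAgent(
    tools=[DuckDuckGoSearchTool()],  # performs the DuckDuckGo search step
    model=HfApiModel(),              # LLM used to analyze the search results
)

topic = "AI agents"  # in the app this comes from the Streamlit text input
result = agent.run(
    f"Search for the latest news about {topic} and write a short analysis "
    "covering the key developments and why they matter."
)
print(result)  # the Streamlit app displays this as the analysis result
```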
Running live in Colab
```mermaid
graph TD
    SA[SmolAgent]
    SA --> SC[Step-by-Step Live Coding]
    SC --> S[Streamlit App via Colab]
    SC --> N[AI News Agent App]
```
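One common way to serve a Streamlit app from a Colab notebook is to start it in the background and expose the port with localtunnel. The sketch below follows that approach; the `app.py` filename is a placeholder, and this is one option rather than necessarily the exact setup used in the live session.

```python
# Hedged sketch: run Streamlit inside Colab and expose it via localtunnel.
import subprocess
import time

# Start Streamlit on its default port in the background.
subprocess.Popen(["streamlit", "run", "app.py",
                  "--server.port", "8501", "--server.headless", "true"])
time.sleep(5)  # give the server a moment to start

# Expose port 8501 publicly (Node.js is preinstalled on Colab).
subprocess.run(["npm", "install", "localtunnel"])
subprocess.run(["npx", "localtunnel", "--port", "8501"])
```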
Contributions are welcome! To get started:
- Fork the repository
- Create a branch (`feature-new-pattern`)
- Commit your changes
- Push to GitHub and open a PR
For detailed guidelines, check CONTRIBUTING.md.
This project is licensed under the MIT License; see LICENSE for details.
For questions or collaborations, feel free to reach out:
Email: ashishpatel.ce.2011@gmail.com
Happy Coding!