Day 1 Unlocked: Diving into LangChain with Claude and Titan on AWS Bedrock


Hey there! Welcome to my journey of learning LangChain with AWS Bedrock. I'm documenting everything as I go, so you can learn alongside me. Today was my first day diving into this fascinating world of AI models, and honestly, it felt like having a conversation with the future.

Quick Setup Note: I'm using AWS SageMaker Studio notebooks for this entire series - it comes with all AWS permissions pre-configured and makes the learning process super smooth. Just create a notebook and you're ready to go!


What is LangChain and Why Use It?
LangChain is a Python framework that makes working with Large Language Models (LLMs) incredibly simple. Instead of writing complex API calls and handling raw JSON responses, LangChain provides a clean, intuitive interface.

Why LangChain?

Simplicity: One line of code instead of 20+ lines of API handling

Consistency: Same interface for different AI models (Claude, GPT, Titan, etc.)

Power: Built-in features like memory, chains, and prompt templates

Flexibility: Easy to switch between models or combine multiple AI calls

Think of LangChain as a bridge between your Python code and powerful AI models. Instead of dealing with complex API calls and JSON responses, LangChain makes it feel like you're just chatting with a really smart friend who happens to live in the cloud.
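To make that bridge concrete, here's a rough before-and-after sketch. The raw boto3 call below follows Bedrock's Messages API format for Claude models as I understand it - treat it as an illustration, not a copy-paste reference:

import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Without LangChain: build the request body by hand, then parse raw JSON
body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello, Claude!"}],
})
raw = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=body,
)
print(json.loads(raw["body"].read())["content"][0]["text"])

# With LangChain (full setup below): one object, one call
# llm.invoke("Hello, Claude!").content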


Setting Up Our Playground
First things first - let's get our tools ready. It's like preparing chai before a good conversation:
!pip install boto3==1.39.13 botocore==1.39.13 langchain==0.3.27 langchain-aws==0.2.31

import boto3
from langchain_aws import ChatBedrock

# Initialize Bedrock client
bedrock_client = boto3.client(
    service_name="bedrock-runtime",
    region_name="us-east-1"
)

This is our foundation. The bedrock_client is like getting a VIP pass to AWS's AI models. Simple, right?


Meeting Claude - The Thoughtful AI
Claude is like that friend who always gives thoughtful, well-structured answers. Let's set him up:
# Create a LangChain ChatBedrock instance for Claude
llm = ChatBedrock(
    client=bedrock_client,
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    model_kwargs={"max_tokens": 256, "temperature": 0.7}
)

response = llm.invoke("Write a short poem about AWS in a human, Indian desi style")
print("Claude Response:\n", response.content)

The magic happens in that invoke() call. It's like asking a question and getting back a thoughtful response. The temperature of 0.7 makes Claude a bit creative - not too robotic, not too wild.


Meeting Titan - The Quick Responder
Now, let's try Amazon's own Titan model. But here's where I learned something important the hard way:
# Try with a Titan model (shorter completion)
titan_llm = ChatBedrock(
    client=bedrock_client,
    model_id="amazon.titan-text-lite-v1",
    model_kwargs={"maxTokenCount": 128, "temperature": 0.5}
)

prompt = """You are a creative Indian poet with a friendly desi vibe. Write a short poem (4 lines max) about AWS cloud services.
Use simple human feelings and desi cultural touches (like chai, monsoon, Bollywood style). Keep the tone warm, positive, and
free of any bad or offensive words.
"""
response = titan_llm.invoke(prompt)
print("Titan Response:\n", response.content)


The Gotchas I Discovered



1. Model Names Matter
I initially used amazon.titan-text-lite-v1, but for chat interactions, amazon.titan-text-express-v1 works better. It's like calling someone by the right name - details matter!
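If you're ever unsure which model IDs exist in your region, you can ask Bedrock directly. Note this uses the "bedrock" control-plane client, which is separate from the "bedrock-runtime" client we created earlier:

import boto3

# Control-plane client ("bedrock"), not the runtime client ("bedrock-runtime")
bedrock = boto3.client("bedrock", region_name="us-east-1")

# List the Titan text model IDs available in this region
for model in bedrock.list_foundation_models()["modelSummaries"]:
    if "titan-text" in model["modelId"]:
        print(model["modelId"])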


2. Parameter Confusion: maxTokenCount vs max_tokens
This one got me! Different models expect different parameter names:

Claude models: Use max_tokens


Some Titan models: Might expect maxTokenCount in certain contexts

LangChain standard: Generally uses max_tokens

Think of it like this - it's the same concept (limiting response length), but different models speak slightly different dialects. Always check the documentation!
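One way to guard against this is a tiny helper that picks the right key from the model ID. This is a hypothetical convenience of my own, purely illustrative - verify the parameter names against the current model docs before relying on it:

def token_limit_kwargs(model_id: str, limit: int) -> dict:
    """Return the token-limit kwarg under the name this model family expects.

    Hypothetical helper - parameter names can change between model versions.
    """
    if model_id.startswith("amazon.titan"):
        return {"maxTokenCount": limit}  # Titan-style naming
    return {"max_tokens": limit}         # Claude / LangChain default

# Usage
print(token_limit_kwargs("amazon.titan-text-lite-v1", 128))
print(token_limit_kwargs("anthropic.claude-3-sonnet-20240229-v1:0", 256))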


3. Using the Right Model Instance
I made a silly mistake - created titan_llm but then used llm for the Titan response. It's like preparing two different teas but serving the wrong one to your guest!
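In code, the slip looked roughly like this:

# Oops - Titan was configured, but Claude got the question
# response = llm.invoke(prompt)       # wrong instance (Claude)
response = titan_llm.invoke(prompt)   # right instance (Titan)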


What I Learned Today


LangChain simplifies everything - No more wrestling with raw API responses

Each model has personality - Claude is thoughtful, Titan is quick

Parameter names vary - Always double-check the docs

Temperature controls creativity - Lower = more focused, Higher = more creative

Model IDs are specific - Use the right one for your use case



Wrapping Up
Day 1 was all about getting comfortable with the basics. Like learning to ride a bike, the first day is about balance and not falling off. Each day we'll be discovering new concepts through hands-on experimentation!

The beauty of LangChain is that it makes powerful AI feel approachable. You don't need a PhD in machine learning - just curiosity and willingness to experiment.

Happy coding! If you found this helpful, leave a comment and follow this whole series as we explore more LangChain magic together.


About Me
Hi! I'm Utkarsh, a Cloud Specialist and AWS Community Builder who loves turning complex AWS topics into fun chai-time stories ☕ Explore more.

This is part of my "LangChain with AWS Bedrock: A Developer's Journey" series. Follow along as I document everything I learn, including the mistakes and the victories.


Day 2: Unleash AI Power with Prompt Engineering Secrets Using LangChain

Welcome to Day 2 of our LangChain + AWS Bedrock journey! Today we dive into the art and science of prompt engineering - the skill that transforms simple text into powerful AI interactions.


What We'll Learn Today


Zero-shot prompting: Getting results without examples

Few-shot prompting: Learning from examples


Role prompting: Making AI adopt specific personas

Model parameters: Fine-tuning AI behavior (temperature, top_p, max_tokens)



Setup (Continuing from Day 1)
Assuming you have the packages and bedrock client from Day 1, let's initialize our Claude model with specific parameters for today's experiments:
from langchain_aws import ChatBedrock
from langchain.prompts import PromptTemplate

# Initialize Claude with parameters for prompt engineering
llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    region_name="us-east-1",
    model_kwargs={
        "max_tokens": 150,
        "temperature": 0.7,
        "top_p": 0.9
    }
)


Understanding Prompt Engineering
Prompt engineering is the art of crafting instructions that guide AI models to produce desired outputs. Think of it like being a director giving instructions to an actor - the clearer and more specific your direction, the better the performance.

The key principles are:

Clarity: Be specific about what you want

Context: Provide relevant background information

Constraints: Set boundaries (length, format, tone)

Examples: Show the desired output style when needed
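Here's a small illustrative template of my own (not from any official docs) that bakes all four principles into one prompt:

from langchain.prompts import PromptTemplate

# Clarity, context, constraints, and an example - all in one template
principled_prompt = PromptTemplate(
    input_variables=["service"],
    template="""You are writing for developers who are new to AWS (context).
Explain {service} in exactly 3 sentences (constraint),
focusing on the problem it solves (clarity).
Match this style: "Amazon S3 is like a bottomless storage locker." (example)""",
)

print(principled_prompt.format(service="Amazon DynamoDB"))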



1. Zero-Shot Prompting
Zero-shot prompting is like asking someone to perform a task they've never seen before, relying purely on their general knowledge and understanding. The model uses its pre-trained knowledge without any specific examples.

When to use Zero-shot:
Simple, well-defined tasks
When the model already understands the domain
For general knowledge questions
When you want the model's "natural" response

Advantages:
Quick and simple
No need to prepare examples
Works well for common tasks

Limitations:
May not follow specific formats
Less control over output style
Can be inconsistent for complex tasks

# Zero-shot prompt - no examples given
zero_shot_prompt = PromptTemplate(
    input_variables=["service"],
    template="Explain {service} in simple terms for a beginner."
)

# Use it
prompt = zero_shot_prompt.format(service="Amazon S3")
response = llm.invoke(prompt)
print(response.content)


2. Few-Shot Prompting
Few-shot prompting is like showing someone examples before asking them to do a task. You provide 2-5 examples of the desired input-output pattern, then ask the model to follow the same pattern.

When to use Few-shot:
When you need consistent formatting
For complex or unusual tasks
When zero-shot results are inconsistent
To establish a specific style or tone

Advantages:
Better control over output format
More consistent results
Can teach complex patterns
Reduces need for detailed instructions

Best practices:
Use 2-5 examples (more isn't always better)
Make examples diverse but consistent
Show edge cases if relevant
Keep examples concise

# Few-shot prompting
few_shot_prompt = PromptTemplate(
    input_variables=["service"],
    template="""
Explain AWS services using this format:

Example 1:
Service: Amazon EC2
Simple Explanation: Virtual computers in the cloud that you can rent by the hour.

Example 2:
Service: Amazon RDS
Simple Explanation: Managed database service that handles backups and updates automatically.

Now explain:
Service: {service}
Simple Explanation:"""
)

# Use the few-shot prompt
prompt = few_shot_prompt.format(service="AWS Lambda")
response = llm.invoke(prompt)
print(response.content)


3. Role Prompting
Role prompting assigns a specific identity, profession, or perspective to the AI. It's like asking the model to "act as" someone with particular expertise, personality, or viewpoint.

Why Role Prompting works:
Models have learned associations between roles and communication styles
Provides context for appropriate language and knowledge level
Helps generate more engaging and targeted responses
Leverages the model's understanding of different perspectives

Types of roles:

Professional roles: "You are a software architect", "You are a teacher"

Personality traits: "You are enthusiastic", "You are patient and methodical"

Expertise levels: "You are a beginner", "You are an expert"

Creative personas: "You are a poet", "You are a storyteller"

Best practices:
Be specific about the role's characteristics
Include relevant context about the audience
Combine roles with other prompting techniques
Test different roles to find what works best

role_prompt = PromptTemplate(
    input_variables=["service", "role"],
    template="""
You are a {role}. Explain {service} from your perspective.
Keep it engaging and use language appropriate to your role.
"""
)

# Test different roles
roles = ["friendly teacher", "creative poet", "cricket commentator"]

for role in roles:
    print(f"\n{role.title()}:")
    prompt = role_prompt.format(service="AWS Lambda", role=role)
    response = llm.invoke(prompt)
    print(response.content)
    print("-" * 40)


Model Parameters
The model_kwargs parameter controls AI behavior:


Key Parameters


max_tokens: Response length (50-150 for short, 200-500 for detailed)

temperature: Creativity level (0.2 = focused, 0.7 = balanced, 0.9 = creative)

top_p: Word diversity (0.8 = focused, 0.9 = balanced)



Quick Examples
# Factual responses
factual_kwargs = {"max_tokens": 150, "temperature": 0.2, "top_p": 0.8}

# Creative responses
creative_kwargs = {"max_tokens": 300, "temperature": 0.9, "top_p": 0.95}
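The simplest way I know to apply these profiles is one ChatBedrock instance per configuration (there may be leaner per-call overrides - worth checking the langchain-aws docs):

# One instance per behavior profile
factual_llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    region_name="us-east-1",
    model_kwargs=factual_kwargs,
)
creative_llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    region_name="us-east-1",
    model_kwargs=creative_kwargs,
)

print(factual_llm.invoke("What is Amazon S3?").content)
print(creative_llm.invoke("Write a haiku about Amazon S3.").content)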


Key Takeaways
Core Prompting Techniques:

Zero-shot: Direct instructions, relies on model knowledge

Few-shot: Provide examples to guide format and style

Role prompting: Adopt personas for engaging explanations

Model Control:

Parameters: Fine-tune behavior with temperature, top_p, max_tokens



Best Practices
Getting Started:

Start Simple: Begin with zero-shot, add complexity as needed

Be Specific: Vague prompts lead to inconsistent results

Test Iteratively: Refine prompts based on outputs

Improving Results:

Use Examples: Show, don't just tell, what you want

Set Constraints: Guide the model with clear boundaries

Consider Context: Provide relevant background information

Optimization:

Monitor Parameters: Adjust temperature and top_p for your use case

Test Edge Cases: Try unusual inputs to test robustness



Common Pitfalls to Avoid


Over-prompting: Too many instructions can confuse the model

Ambiguous language: Be precise in your requirements


Ignoring context length: Very long prompts may get truncated

Not testing edge cases: Try unusual inputs to test robustness

Fixed parameters: Different tasks need different temperature/top_p values

Inconsistent examples: Make sure few-shot examples follow the same pattern
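To illustrate that last pitfall, here are two made-up example sets - one consistent, one that silently drifts:

# Consistent: every example follows the same "Service / Simple Explanation" shape
consistent_examples = """Service: Amazon EC2
Simple Explanation: Virtual computers in the cloud.

Service: Amazon RDS
Simple Explanation: Managed databases with automatic backups."""

# Inconsistent: the second example switches format and tone mid-prompt
inconsistent_examples = """Service: Amazon EC2
Simple Explanation: Virtual computers in the cloud.

RDS -> it's a db thing, does backups etc."""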



About Me
Hi! I'm Utkarsh, a Cloud Specialist and AWS Community Builder who loves turning complex AWS topics into fun chai-time stories ☕ Explore more.

This is part of my "LangChain with AWS Bedrock: A Developer's Journey" series. Follow along as I document everything I learn, including the mistakes and the victories.

ScrollX UI: Unleash Stunning Animated Components for Next.js Powerhouses

ScrollX UI: an open-source component library with 60+ animated, customizable components for modern web applications. Built for developers working with Next.js and TypeScript projects.

Features:

🎨 Complete collection of interactive UI elements with smooth animations

🔧 Full source code access with no vendor restrictions

♿ Built-in accessibility following WAI-ARIA standards

Structured for AI-assisted development workflows

⚡ Modern tech stack with Tailwind CSS and Framer Motion

Multiple installation options including shadcn/ui CLI integration

Perfect for SaaS dashboards, landing pages, and any project requiring polished user interactions. The composable architecture makes it easy to customize and extend components as your application grows.

Blog Post | GitHub Repo | Browse All Components

Unlocking AI Trust: How Grad-CAM Reveals Alzheimer’s Prediction Secrets

When we talk about Artificial Intelligence in healthcare, the first thing that comes to mind is usually accuracy. We want the model to predict correctly, whether it's diagnosing eye diseases, classifying scans, or detecting early signs of Alzheimer's.

But here's the truth I learned in my project: accuracy alone is not enough.


The Challenge I Faced
In my Alzheimer's early detection project, the model was performing well on paper.

Classification Report:

                      precision    recall  f1-score   support

     Mild Impairment       0.97      0.99      0.98       179
 Moderate Impairment       1.00      0.92      0.96        12
       No Impairment       0.99      1.00      0.99       640
Very Mild Impairment       0.99      0.98      0.98       448

            accuracy                           0.99      1279
           macro avg       0.99      0.97      0.98      1279
        weighted avg       0.99      0.99      0.99      1279

------------------------------------------------------
Matthews Correlation Coefficient (MCC): 0.9781
------------------------------------------------------

The numbers looked impressive, but there was still one big question:
How can we trust what the AI sees?
Doctors won’t just accept a probability score. Patients and their families won’t feel reassured just by a number. They need to know why the model made that decision.


Discovering Explainability with Grad-CAM
That's where I explored Grad-CAM (Gradient-weighted Class Activation Mapping).

Don't worry, it's not as complicated as it sounds. Grad-CAM creates a heatmap that highlights the regions of an image the model focuses on when making a prediction.

In other words, it turns the "black box" into something more transparent and human-readable.
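For the curious, here is what Grad-CAM boils down to in code. This is a minimal Keras sketch of the standard technique, assuming a CNN with a named final convolutional layer - it is not the exact implementation from my notebook:

import numpy as np
import tensorflow as tf

def grad_cam_heatmap(model, image, last_conv_layer_name):
    """Return a [0, 1] heatmap of where `model` looked for its top class.

    `last_conv_layer_name` is a placeholder - it depends on your architecture.
    """
    # Model mapping the input to (last conv activations, predictions)
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        top_class_score = preds[:, tf.argmax(preds[0])]
    # Gradient of the top class score w.r.t. the conv feature maps
    grads = tape.gradient(top_class_score, conv_out)
    # Channel importance = global-average-pooled gradients
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))
    # Weighted sum of the feature maps, then ReLU and normalize
    heatmap = tf.squeeze(conv_out[0] @ pooled_grads[..., tf.newaxis])
    heatmap = tf.maximum(heatmap, 0) / (tf.reduce_max(heatmap) + 1e-8)
    return heatmap.numpy()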


Before and After Grad-CAM
In my Alzheimer’s project, the difference was clear:


Before
The model predicted “Mild Demented” with high confidence, but I had no way to explain why.


After Grad-CAM
The heatmap showed exactly which parts of the brain scan the AI considered most important. And more importantly, they were the medically relevant regions linked to early Alzheimer's symptoms.

That small shift made a big difference. Suddenly, the model wasn't just a silent judge giving out labels. It became a tool that doctors could actually discuss, question, and trust.


What I Learned
This project taught me a valuable lesson:
Accuracy is powerful, but explainability is what builds trust.
In sensitive areas like healthcare, trust matters as much as performance.
Tools like Grad-CAM are not just technical tricks, they are bridges between AI researchers and medical professionals.



Final Thoughts
Working on this project reminded me why I got into AI research in the first place: not just to build models, but to build models that people can trust and use.

Explainable AI is not optional anymore. It's the key to making AI truly impactful in real life, especially in areas that touch human health.

See the full notebook of Alzheimer Detection here: https://www.kaggle.com/code/hafizabdiel/alzheimer-classification-with-swin-efficient-net

If you're curious about my other AI projects, you can find them here: http://abdielz.tech/