
Day 2: Unleash AI Power with Prompt Engineering Secrets Using LangChain

Welcome to Day 2 of our LangChain + AWS Bedrock journey! Today we dive into the art and science of prompt engineering - the skill that transforms simple text into powerful AI interactions.


What We'll Learn Today


Zero-shot prompting: Getting results without examples

Few-shot prompting: Learning from examples


Role prompting: Making AI adopt specific personas

Model parameters: Fine-tuning AI behavior (temperature, top_p, max_tokens)



Setup (Continuing from Day 1)
Assuming you have the packages and bedrock client from Day 1, let's initialize our Claude model with specific parameters for today's experiments:
from langchain_aws import ChatBedrock
from langchain.prompts import PromptTemplate

# Initialize Claude with parameters for prompt engineering
llm = ChatBedrock(
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    region_name="us-east-1",
    model_kwargs={
        "max_tokens": 150,
        "temperature": 0.7,
        "top_p": 0.9,
    },
)


Understanding Prompt Engineering
Prompt engineering is the art of crafting instructions that guide AI models to produce desired outputs. Think of it like being a director giving instructions to an actor - the clearer and more specific your direction, the better the performance.

The key principles are:

Clarity: Be specific about what you want

Context: Provide relevant background information

Constraints: Set boundaries (length, format, tone)

Examples: Show the desired output style when needed
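To make these principles concrete, here is a minimal plain-Python sketch of a template that applies all four (the template text and variable names are illustrative, not from the article or any SDK):

```python
# Illustrative template combining the four principles
TEMPLATE = (
    "You are explaining AWS services to {audience}.\n"      # context
    "Explain {service} in plain language.\n"                # clarity
    "Limit the answer to three sentences, no jargon, "      # constraints
    "and end with one short example of when to use it."     # examples
)

prompt = TEMPLATE.format(service="Amazon S3", audience="a new developer")
print(prompt)
```

The same template text drops straight into a LangChain PromptTemplate, which is exactly what the sections below do.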



1. Zero-Shot Prompting
Zero-shot prompting is like asking someone to perform a task they've never seen before, relying purely on their general knowledge and understanding. The model uses its pre-trained knowledge without any specific examples.

When to use Zero-shot:
Simple, well-defined tasks
When the model already understands the domain
For general knowledge questions
When you want the model's "natural" response
Advantages:
Quick and simple
No need to prepare examples
Works well for common tasks
Limitations:
May not follow specific formats
Less control over output style
Can be inconsistent for complex tasks

# Zero-shot prompt - no examples given
zero_shot_prompt = PromptTemplate(
    input_variables=["service"],
    template="Explain {service} in simple terms for a beginner."
)

# Use it
prompt = zero_shot_prompt.format(service="Amazon S3")
response = llm.invoke(prompt)
print(response.content)


2. Few-Shot Prompting
Few-shot prompting is like showing someone examples before asking them to do a task. You provide 2-5 examples of the desired input-output pattern, then ask the model to follow the same pattern.

When to use Few-shot:
When you need consistent formatting
For complex or unusual tasks
When zero-shot results are inconsistent
To establish a specific style or tone
Advantages:
Better control over output format
More consistent results
Can teach complex patterns
Reduces need for detailed instructions
Best practices:
Use 2-5 examples (more isn't always better)
Make examples diverse but consistent
Show edge cases if relevant
Keep examples concise

# Few-shot prompting
few_shot_prompt = PromptTemplate(
    input_variables=["service"],
    template="""
Explain AWS services using this format:

Example 1:
Service: Amazon EC2
Simple Explanation: Virtual computers in the cloud that you can rent by the hour.

Example 2:
Service: Amazon RDS
Simple Explanation: Managed database service that handles backups and updates automatically.

Now explain:
Service: {service}
Simple Explanation:"""
)

# Use the few-shot prompt
prompt = few_shot_prompt.format(service="Amazon Lambda")
response = llm.invoke(prompt)
print(response.content)


3. Role Prompting
Role prompting assigns a specific identity, profession, or perspective to the AI. It's like asking the model to "act as" someone with particular expertise, personality, or viewpoint.

Why Role Prompting works:
Models have learned associations between roles and communication styles
Provides context for appropriate language and knowledge level
Helps generate more engaging and targeted responses
Leverages the model's understanding of different perspectives
Types of roles:

Professional roles: "You are a software architect", "You are a teacher"

Personality traits: "You are enthusiastic", "You are patient and methodical"

Expertise levels: "You are a beginner", "You are an expert"

Creative personas: "You are a poet", "You are a storyteller"
Best practices:
Be specific about the role's characteristics
Include relevant context about the audience
Combine roles with other prompting techniques
Test different roles to find what works best

role_prompt = PromptTemplate(
    input_variables=["service", "role"],
    template="""
You are a {role}. Explain {service} from your perspective.
Keep it engaging and use language appropriate to your role.
"""
)

# Test different roles
roles = ["friendly teacher", "creative poet", "cricket commentator"]

for role in roles:
    print(f"\n{role.title()}:")
    prompt = role_prompt.format(service="Amazon Lambda", role=role)
    response = llm.invoke(prompt)
    print(response.content)
    print("-" * 40)


Model Parameters
The model_kwargs parameter controls AI behavior:


Key Parameters


max_tokens: Response length (50-150 for short, 200-500 for detailed)

temperature: Creativity level (0.2 = focused, 0.7 = balanced, 0.9 = creative)

top_p: Word diversity (0.8 = focused, 0.9 = balanced)



Quick Examples
# Factual responses
factual_kwargs = {"max_tokens": 150, "temperature": 0.2, "top_p": 0.8}

# Creative responses
creative_kwargs = {"max_tokens": 300, "temperature": 0.9, "top_p": 0.95}
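To build intuition for what temperature does under the hood, here is a small NumPy sketch (not Bedrock code) that rescales a toy set of token logits, in line with the guidance above: low temperature sharpens the distribution toward the top token, while high temperature flattens it:

```python
import numpy as np

def apply_temperature(logits, temperature):
    """Softmax over logits scaled by 1/temperature."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()          # numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

logits = [2.0, 1.0, 0.5]            # toy scores for three candidate tokens
focused = apply_temperature(logits, 0.2)
creative = apply_temperature(logits, 0.9)

# The top token dominates at low temperature and loses ground as it rises
print(focused.round(3))
print(creative.round(3))
```

This is why `temperature=0.2` is suggested for factual answers: most of the probability mass lands on the single most likely token, so sampling becomes nearly deterministic.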


Key Takeaways
Core Prompting Techniques:

Zero-shot: Direct instructions, relies on model knowledge

Few-shot: Provide examples to guide format and style

Role prompting: Adopt personas for engaging explanations

Model Control:

Parameters: Fine-tune behavior with temperature, top_p, max_tokens



Best Practices
Getting Started:

Start Simple: Begin with zero-shot, add complexity as needed

Be Specific: Vague prompts lead to inconsistent results

Test Iteratively: Refine prompts based on outputs

Improving Results:

Use Examples: Show don't just tell what you want

Set Constraints: Guide the model with clear boundaries

Consider Context: Provide relevant background information

Optimization:

Monitor Parameters: Adjust temperature and top_p for your use case

Test Edge Cases: Try unusual inputs to test robustness



Common Pitfalls to Avoid


Over-prompting: Too many instructions can confuse the model

Ambiguous language: Be precise in your requirements


Ignoring context length: Very long prompts may get truncated

Not testing edge cases: Try unusual inputs to test robustness

Fixed parameters: Different tasks need different temperature/top_p values

Inconsistent examples: Make sure few-shot examples follow the same pattern



About Me
Hi! I'm Utkarsh, a Cloud Specialist and AWS Community Builder who loves turning complex AWS topics into fun chai-time stories ☕

This is part of my "LangChain with AWS Bedrock: A Developer's Journey" series. Follow along as I document everything I learn, including the mistakes and the victories.

Similar Posts



ScrollX UI: Unleash Stunning Animated Components for Next.js Powerhouses

ScrollX UI: an open-source component library with 60+ animated, customizable components for modern web applications. Built for developers working with Next.js and TypeScript projects.

Features:

🎨 Complete collection of interactive UI elements with smooth animations
🔧 Full source code access with no vendor restrictions
♿ Built-in accessibility following WAI-ARIA standards
Structured for AI-assisted development workflows
⚡ Modern tech stack with Tailwind CSS and Framer Motion
Multiple installation options including shadcn/ui CLI integration

Perfect for SaaS dashboards, landing pages, and any project requiring polished user interactions. The composable architecture makes it easy to customize and extend components as your application grows.

Blog Post | GitHub Repo | Browse All Components


Unlocking AI Trust: How Grad-CAM Reveals Alzheimer’s Prediction Secrets

When we talk about Artificial Intelligence in healthcare, the first thing that comes to mind is usually accuracy. We want the model to predict correctly, whether it’s diagnosing eye diseases, classifying scans, or detecting early signs of Alzheimer’s.But here’s the truth I learned in my project is accuracy alone is not enough.


The Challenge I Faced
In my Alzheimer’s early detection project, the model was performing well on paper.

Classification Report:

precision recall f1-score support

Mild Impairment 0.97 0.99 0.98 179
Moderate Impairment 1.00 0.92 0.96 12
No Impairment 0.99 1.00 0.99 640
Very Mild Impairment 0.99 0.98 0.98 448

accuracy 0.99 1279
macro avg 0.99 0.97 0.98 1279
weighted avg 0.99 0.99 0.99 1279

------------------------------------------------------
Matthew's Correlation Coefficient (MCC): 0.9781
------------------------------------------------------

The numbers looked impressive, but there was still one big question:
How can we trust what the AI sees?
Doctors won’t just accept a probability score. Patients and their families won’t feel reassured just by a number. They need to know why the model made that decision.


Discovering Explainability with Grad-CAM
That’s where I explored Grad-CAM (Gradient-weighted Class Activation Mapping).

Don’t worry, it’s not as complicated as it sounds. Grad-CAM creates a heatmap that highlights the regions of an image the model focuses on when making a prediction.

In other words, it turns the “black box” into something more transparent and human-readable.
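The core of Grad-CAM is only a few lines of array math. As a rough sketch, using random NumPy arrays as stand-ins for the real convolutional activations and gradients you would extract from the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-ins for a conv layer's K=8 feature maps and the gradients of the
# predicted class score with respect to them (from a real backward pass)
activations = rng.normal(size=(8, 14, 14))
gradients = rng.normal(size=(8, 14, 14))

# 1. Global-average-pool the gradients: one importance weight per channel
weights = gradients.mean(axis=(1, 2))

# 2. Weighted sum of the feature maps, then ReLU keeps positive evidence only
cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)

# 3. Normalize to [0, 1] so it can be resized and overlaid as a heatmap
cam = cam / (cam.max() + 1e-8)
print(cam.shape)
```

In a real pipeline the activations and gradients come from hooks on the last convolutional layer, and the resulting map is upsampled to the input image size before overlaying it on the scan.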


Before and After Grad-CAM
In my Alzheimer’s project, the difference was clear:


Before
The model predicted “Mild Demented” with high confidence, but I had no way to explain why.


After Grad-CAM
The heatmap showed exactly which parts of the brain scan the AI considered most important. And more importantly, they were the medically relevant regions linked to early Alzheimer’s symptoms.

That small shift made a big difference. Suddenly, the model wasn’t just a silent judge giving out labels. It became a tool that doctors could actually discuss, question, and trust.


What I Learned
This project taught me a valuable lesson:
Accuracy is powerful, but explainability is what builds trust.
In sensitive areas like healthcare, trust matters as much as performance.
Tools like Grad-CAM are not just technical tricks, they are bridges between AI researchers and medical professionals.



Final Thoughts
Working on this project reminded me why I got into AI research in the first place: not just to build models, but to build models that people can trust and use.

Explainable AI is not optional anymore. It’s the key to making AI truly impactful in real life, especially in areas that touch human health.

See the full notebook of Alzheimer Detection here: https://www.kaggle.com/code/hafizabdiel/alzheimer-classification-with-swin-efficient-net

If you’re curious about my other AI projects, you can find them here: http://abdielz.tech/


🚀 Unlocking the Power Set: Master Subsets Pattern for Amazon Interviews (Day 10)

The Subsets Pattern is widely used in combinatorial problems where we need to explore all combinations, subsets, or decisions (take/not take).
Amazon often uses this to test recursion + backtracking + BFS/DFS skills.


🔑 When to Use Subsets Pattern?

Generate all subsets / combinations of a set
Handle decision-based recursion (pick or not pick)
Solve problems with combinatorial explosion (powerset, permutations, combination sums)
Explore feature toggles / inclusion-exclusion




Problem 1: Generate All Subsets
Amazon-style phrasing:
Given a set of distinct integers nums, return all possible subsets (the power set).


Java Solution (Backtracking)
import java.util.*;

public class Subsets {
    public static List<List<Integer>> subsets(int[] nums) {
        List<List<Integer>> result = new ArrayList<>();
        backtrack(nums, 0, new ArrayList<>(), result);
        return result;
    }

    private static void backtrack(int[] nums, int index, List<Integer> current, List<List<Integer>> result) {
        result.add(new ArrayList<>(current)); // add current subset

        for (int i = index; i < nums.length; i++) {
            current.add(nums[i]); // include nums[i]
            backtrack(nums, i + 1, current, result);
            current.remove(current.size() - 1); // backtrack
        }
    }

    public static void main(String[] args) {
        int[] nums = {1, 2, 3};
        System.out.println(subsets(nums));
    }
}

✅ Time Complexity: O(2^n)
✅ Space Complexity: O(n) recursion depth


Problem 2: Subsets With Duplicates
Amazon-style phrasing:
Given a collection of integers nums that might contain duplicates, return all possible subsets without duplicates.


Java Solution
import java.util.*;

public class SubsetsWithDup {
    public static List<List<Integer>> subsetsWithDup(int[] nums) {
        Arrays.sort(nums); // sort to handle duplicates
        List<List<Integer>> result = new ArrayList<>();
        backtrack(nums, 0, new ArrayList<>(), result);
        return result;
    }

    private static void backtrack(int[] nums, int index, List<Integer> current, List<List<Integer>> result) {
        result.add(new ArrayList<>(current));

        for (int i = index; i < nums.length; i++) {
            if (i > index && nums[i] == nums[i - 1]) continue; // skip duplicates
            current.add(nums[i]);
            backtrack(nums, i + 1, current, result);
            current.remove(current.size() - 1);
        }
    }
}

✅ Amazon Insight:
Tests your ability to handle duplicates gracefully with sorting + skipping.


Problem 3: Letter Case Permutation
Amazon-style phrasing:
Given a string s, return all possible strings after toggling case of each letter.


Java Solution
import java.util.*;

public class LetterCasePermutation {
    public static List<String> letterCasePermutation(String s) {
        List<String> result = new ArrayList<>();
        backtrack(s.toCharArray(), 0, new StringBuilder(), result);
        return result;
    }

    private static void backtrack(char[] chars, int index, StringBuilder current, List<String> result) {
        if (index == chars.length) {
            result.add(current.toString());
            return;
        }

        char c = chars[index];
        if (Character.isLetter(c)) {
            current.append(Character.toLowerCase(c));
            backtrack(chars, index + 1, current, result);
            current.deleteCharAt(current.length() - 1);

            current.append(Character.toUpperCase(c));
            backtrack(chars, index + 1, current, result);
            current.deleteCharAt(current.length() - 1);
        } else {
            current.append(c);
            backtrack(chars, index + 1, current, result);
            current.deleteCharAt(current.length() - 1);
        }
    }
}

✅ Amazon Insight:
Tests creativity — it’s still a subsets problem but disguised with characters instead of numbers.


Problem 4: Generate Balanced Parentheses
Amazon-style phrasing:
Given n pairs of parentheses, write a function to generate all combinations of well-formed parentheses.


Java Solution
import java.util.*;

public class GenerateParentheses {
    public static List<String> generateParenthesis(int n) {
        List<String> result = new ArrayList<>();
        backtrack(result, "", 0, 0, n);
        return result;
    }

    private static void backtrack(List<String> result, String current, int open, int close, int max) {
        if (current.length() == max * 2) {
            result.add(current);
            return;
        }

        if (open < max) backtrack(result, current + "(", open + 1, close, max);
        if (close < open) backtrack(result, current + ")", open, close + 1, max);
    }
}

✅ Amazon Insight:
Tests recursion depth + constraints (close ≤ open).


Extended Problem List (Amazon Patterns)

Combination Sum (LeetCode 39)

Combination Sum II (LeetCode 40) – with duplicates
Permutations (LeetCode 46)

Permutations II (LeetCode 47) – with duplicates
Word Search II (Backtracking in Grid)
Sudoku Solver (Hard Backtracking)



🔑 Key Takeaways

Subsets pattern = decision making (take / skip).
Natural recursion fits these problems well.
Amazon loves duplicates handling (sorting + skipping).
Expect parentheses, toggling, or string variants.
Next in the series (Day 11):
Modified Binary Search Pattern – super popular in Amazon interviews for rotated arrays, searching in infinite arrays, and tricky conditions.