<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Building in the Cloud - Stories from a cloud architect]]></title><description><![CDATA[What does it mean to build applications in the cloud? With 10+ years of experience in several roles, I have a lot to talk about and love to share my experiences based on real life situations.]]></description><link>https://cremich.cloud</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1689247536416/v1TBDkCpP.png</url><title>Building in the Cloud - Stories from a cloud architect</title><link>https://cremich.cloud</link></image><generator>RSS for Node</generator><lastBuildDate>Sat, 18 Apr 2026 10:41:37 GMT</lastBuildDate><atom:link href="https://cremich.cloud/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[How Amazon Q Developer CLI Manages Context Files]]></title><description><![CDATA[Ever wondered how Amazon Q CLI seamlessly incorporates your project files into AI conversations? Let us decode what’s happening behind the scenes of the /context command.
The Context Management System of Q CLI
The Q CLI application is shipped with an...]]></description><link>https://cremich.cloud/how-amazon-q-developer-cli-manages-context-files</link><guid isPermaLink="true">https://cremich.cloud/how-amazon-q-developer-cli-manages-context-files</guid><category><![CDATA[amazon Q developer CLI ]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Tue, 30 Sep 2025 09:52:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1759225894922/b0383d65-d5c2-4264-96fd-15b05f3c43fc.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ever wondered how Amazon Q CLI seamlessly incorporates your project files into AI conversations? Let us decode what’s happening behind the scenes of the <code>/context</code> command.</p>
<h2 id="heading-the-context-management-system-of-q-cli">The Context Management System of Q CLI</h2>
<p>The Q CLI application ships with an internal context management system built around several key components. Tracked files fall into two categories:</p>
<ul>
<li><p>🤖 Agent paths: Files defined in agent configuration (persistent)</p>
</li>
<li><p>📁 Session paths: Files added via /context add (temporary)</p>
</li>
</ul>
<p>Agent-defined files are persistent, while session files are temporary. Agent paths are persistent simply because they are defined in the agent configuration file, not because of any special caching mechanism. Session context changes don't persist between chat sessions. In both cases, files are read fresh for each request (no caching), respecting filesystem permissions.</p>
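<p>To make the two categories concrete, here is a minimal Python sketch (my own illustration, not the actual Rust implementation; all names are hypothetical) of how a context manager might track agent paths separately from session paths:</p>

```python
class ContextManager:
    """Hypothetical model of Q CLI's two context categories."""

    def __init__(self, agent_paths):
        # Agent paths come from the agent configuration file and therefore
        # survive across chat sessions.
        self.agent_paths = list(agent_paths)
        # Session paths are added via /context add and live only as long
        # as the current chat session.
        self.session_paths = []

    def add_session_path(self, path):
        self.session_paths.append(path)

    def all_paths(self):
        # File contents are re-read on every request, so only the paths
        # are tracked here -- no caching of file contents.
        return self.agent_paths + self.session_paths


cm = ContextManager(agent_paths=["README.md"])
cm.add_session_path("src/main.py")
```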
<h2 id="heading-from-command-to-context">From command to context</h2>
<p>When you use the <code>/context add</code> command, Q CLI:</p>
<ul>
<li><p>Validates that paths exist (unless <code>--force</code> is used)</p>
</li>
<li><p>Expands glob patterns (e.g., <code>*.py</code>, <code>src/**/*.js</code>)</p>
</li>
<li><p>Adds the paths as session entries</p>
</li>
</ul>
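<p>The three steps can be sketched in Python (an illustrative approximation using the standard <code>glob</code> module, not the real Q CLI code):</p>

```python
import glob
import os
import tempfile


def context_add(patterns, session_paths, force=False):
    """Hypothetical mirror of /context add: validate, expand globs, record."""
    for pattern in patterns:
        matches = sorted(glob.glob(pattern, recursive=True))
        if not matches and not force:
            # Without --force, a pattern that matches nothing is rejected.
            raise FileNotFoundError(f"no files match {pattern!r}")
        # Record the matched paths as session entries.
        session_paths.extend(matches if matches else [pattern])
    return session_paths


# Demo in a throwaway directory with two Python files.
tmp = tempfile.mkdtemp()
for name in ("a.py", "b.py"):
    open(os.path.join(tmp, name), "w").close()

session = context_add([os.path.join(tmp, "*.py")], [])
```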
<p>Now for the most important part: how files become part of your conversation. Your context files are wrapped between the special <code>CONTEXT_ENTRY_START_HEADER</code> and <code>CONTEXT_ENTRY_END_HEADER</code> markers and integrated as specially formatted content within the conversation context that is sent to the AI model:</p>
<pre><code class="lang-markdown">--- CONTEXT ENTRY BEGIN ---
[src/main.py]
def main():
    print("Hello, World!")

[src/utils.py]
def helper_function():
    return "utility"
--- CONTEXT ENTRY END ---
</code></pre>
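<p>In Python pseudocode, the formatting boils down to wrapping each file as <code>[filename]</code> followed by its content between the two markers (marker strings paraphrased here, not the exact constants; the real logic lives in the Rust method shown below):</p>

```python
# Illustrative re-creation of the context entry formatting.
CONTEXT_ENTRY_START_HEADER = "--- CONTEXT ENTRY BEGIN ---\n"
CONTEXT_ENTRY_END_HEADER = "--- CONTEXT ENTRY END ---\n"


def render_context_entry(files):
    parts = [CONTEXT_ENTRY_START_HEADER]
    for filename, content in files:
        # Each file becomes "[filename]" followed by its raw content.
        parts.append(f"[{filename}]\n{content}\n")
    parts.append(CONTEXT_ENTRY_END_HEADER)
    return "".join(parts)


entry = render_context_entry(
    [("src/main.py", 'def main():\n    print("Hello, World!")\n')]
)
```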
<p>The implementation can be found in the <code>context_messages</code> method within <a target="_blank" href="https://github.com/aws/amazon-q-developer-cli/blob/main/crates/chat-cli/src/cli/chat/conversation.rs#L799"><code>conversation.rs</code></a>:</p>
<pre><code class="lang-rust"><span class="hljs-keyword">async</span> <span class="hljs-function"><span class="hljs-keyword">fn</span> <span class="hljs-title">context_messages</span></span>(
        &amp;<span class="hljs-keyword">mut</span> <span class="hljs-keyword">self</span>,
        os: &amp;Os,
        additional_context: <span class="hljs-built_in">Option</span>&lt;<span class="hljs-built_in">String</span>&gt;,
    ) -&gt; (<span class="hljs-built_in">Option</span>&lt;<span class="hljs-built_in">Vec</span>&lt;HistoryEntry&gt;&gt;, <span class="hljs-built_in">Vec</span>&lt;(<span class="hljs-built_in">String</span>, <span class="hljs-built_in">String</span>)&gt;) {
        <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut</span> context_content = <span class="hljs-built_in">String</span>::new();
        <span class="hljs-keyword">let</span> <span class="hljs-keyword">mut</span> dropped_context_files = <span class="hljs-built_in">Vec</span>::new();
        <span class="hljs-keyword">if</span> <span class="hljs-keyword">let</span> <span class="hljs-literal">Some</span>((summary, _)) = &amp;<span class="hljs-keyword">self</span>.latest_summary {
            context_content.push_str(CONTEXT_ENTRY_START_HEADER);
            context_content.push_str(<span class="hljs-string">"This summary contains ALL relevant information from our previous conversation including tool uses, results, code analysis, and file operations. YOU MUST reference this information when answering questions and explicitly acknowledge specific details from the summary when they're relevant to the current question.\n\n"</span>);
            context_content.push_str(<span class="hljs-string">"SUMMARY CONTENT:\n"</span>);
            context_content.push_str(summary);
            context_content.push(<span class="hljs-string">'\n'</span>);
            context_content.push_str(CONTEXT_ENTRY_END_HEADER);
        }

        <span class="hljs-comment">// Add context files if available</span>
        <span class="hljs-keyword">if</span> <span class="hljs-keyword">let</span> <span class="hljs-literal">Some</span>(context_manager) = <span class="hljs-keyword">self</span>.context_manager.as_mut() {
            <span class="hljs-keyword">match</span> context_manager.collect_context_files_with_limit(os).<span class="hljs-keyword">await</span> {
                <span class="hljs-literal">Ok</span>((files_to_use, files_dropped)) =&gt; {
                    <span class="hljs-keyword">if</span> !files_dropped.is_empty() {
                        dropped_context_files.extend(files_dropped);
                    }

                    <span class="hljs-keyword">if</span> !files_to_use.is_empty() {
                        context_content.push_str(CONTEXT_ENTRY_START_HEADER);
                        <span class="hljs-keyword">for</span> (filename, content) <span class="hljs-keyword">in</span> files_to_use {
                            context_content.push_str(&amp;<span class="hljs-built_in">format!</span>(<span class="hljs-string">"[{}]\n{}\n"</span>, filename, content));
                        }
                        context_content.push_str(CONTEXT_ENTRY_END_HEADER);
                    }
                },
                <span class="hljs-literal">Err</span>(e) =&gt; {
                    warn!(<span class="hljs-string">"Failed to get context files: {}"</span>, e);
                },
            }
        }

        <span class="hljs-keyword">if</span> <span class="hljs-keyword">let</span> <span class="hljs-literal">Some</span>(context) = additional_context {
            context_content.push_str(&amp;context);
        }

        <span class="hljs-keyword">if</span> <span class="hljs-keyword">let</span> <span class="hljs-literal">Some</span>(agent_prompt) = <span class="hljs-keyword">self</span>.agents.get_active().and_then(|a| a.prompt.as_ref()) {
            context_content.push_str(&amp;<span class="hljs-built_in">format!</span>(<span class="hljs-string">"Follow this instruction: {}"</span>, agent_prompt));
        }

        <span class="hljs-keyword">if</span> !context_content.is_empty() {
            <span class="hljs-keyword">self</span>.context_message_length = <span class="hljs-literal">Some</span>(context_content.len());
            <span class="hljs-keyword">let</span> user = UserMessage::new_prompt(context_content, <span class="hljs-literal">None</span>);
            <span class="hljs-keyword">let</span> assistant = AssistantMessage::new_response(<span class="hljs-literal">None</span>, <span class="hljs-string">"I will fully incorporate this information when generating my responses, and explicitly acknowledge relevant parts of the summary when answering questions."</span>.into());
            (
                <span class="hljs-literal">Some</span>(<span class="hljs-built_in">vec!</span>[HistoryEntry {
                    user,
                    assistant,
                    request_metadata: <span class="hljs-literal">None</span>,
                }]),
                dropped_context_files,
            )
        } <span class="hljs-keyword">else</span> {
            (<span class="hljs-literal">None</span>, dropped_context_files)
        }
    }
</code></pre>
<p>Context files are sorted alphabetically and deduplicated by filename. Hooks can also contribute context content using the same header format. On top of that, the Q CLI application implements token management for your context files:</p>
<ul>
<li><p>It uses 75% of the model's context window for files</p>
</li>
<li><p>Automatically drops the largest files when limits are exceeded</p>
</li>
<li><p>Provides warnings when files are dropped</p>
</li>
</ul>
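<p>A hedged sketch of that budgeting behavior (the numbers and the chars-per-token heuristic are illustrative, not Q CLI's actual values):</p>

```python
def apply_context_budget(files, context_window_tokens, chars_per_token=4):
    """Keep files within ~75% of the context window, dropping largest first."""
    budget_chars = int(context_window_tokens * 0.75) * chars_per_token
    kept, dropped = dict(files), []
    # Drop the largest remaining file until the content fits the budget.
    while kept and sum(len(c) for c in kept.values()) > budget_chars:
        largest = max(kept, key=lambda f: len(kept[f]))
        dropped.append(largest)
        del kept[largest]
    return kept, dropped


kept, dropped = apply_context_budget(
    {"big.md": "x" * 10_000, "small.md": "y" * 100},
    context_window_tokens=1_000,  # tiny window to force a drop
)
```

<p>Here the oversized file is dropped and a warning would be surfaced to the user, matching the behavior described above.</p>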
<p>Alphabetical sorting, deduplication, fresh reads on every request, and token-aware trimming are the key implementation details worth knowing when managing your context.</p>
<p>The next time you use <code>/context add</code>, you will know there is a system working behind the scenes to make your files seamlessly available to the underlying AI model - all while keeping your conversation flowing smoothly.</p>
]]></content:encoded></item><item><title><![CDATA[Quo vadis Spec-Driven-Development]]></title><description><![CDATA[Introduction
Ever since I embraced AI-assisted software engineering, there's been one constant shaping my workflows, thinking, and experiments: the adoption of human interaction patterns when working with AI assistants. My conversation-first approach...]]></description><link>https://cremich.cloud/quo-vadis-spec-driven-development</link><guid isPermaLink="true">https://cremich.cloud/quo-vadis-spec-driven-development</guid><category><![CDATA[Amazon Q Developer]]></category><category><![CDATA[Spec-Driven-Development]]></category><category><![CDATA[AI Coding Assistant]]></category><category><![CDATA[vibe coding]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Tue, 16 Sep 2025 09:45:18 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1758012983548/a6e5211f-3f04-4341-8436-ac9fd84e4871.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Ever since I embraced AI-assisted software engineering, there's been one constant shaping my workflows, thinking, and experiments: the adoption of human interaction patterns when working with AI assistants. My conversation-first approach, as I've come to call it, dramatically helped me embed Amazon Q Developer in my workflows while improving my efficiency. It became central to getting better at context management and prompt engineering, during a phase of AI adoption that was mostly dominated by vibe-coding approaches.</p>
<p>My paradigm of adopting AI by imitating human collaboration isn't bound to software engineering alone. It has become my personal best practice for adopting AI across many domains.</p>
<p>Take a recent use case I worked on with a team at Bundesliga about using AI to assist in copywriting for marketing campaigns. The team's initial instinct was to pick the latest frontier model and hope for the best. My experience paints a different picture: the differentiator isn't the model itself, but your ability to provide the right context, the right data, and the right instructions - a mix of context management, data foundations, and prompt engineering.</p>
<p>So I asked: What do human copywriters need to create a world-class tagline for a marketing campaign? The answers to that question are exactly what you need to provide to the AI to get a similar world-class experience.</p>
<p>And this is exactly what works with software engineering. What do teams need to build comprehensive and detailed specs to reduce time to market?</p>
<h2 id="heading-the-evolution-from-vibe-coding">The Evolution from Vibe Coding</h2>
<p>With spec-driven development, we're taking the next evolutionary step in AI-assisted software engineering. The following quote from <a target="_blank" href="https://github.com/github/spec-kit">GitHub's Spec-Kit</a> is, from my perspective, on point:</p>
<blockquote>
<p>Spec-Driven Development <strong>flips the script</strong> on traditional software development. For decades, code has been king — specifications were just scaffolding we built and discarded once the "real work" of coding began. Spec-Driven Development changes this: <strong>specifications become executable</strong>, directly generating working implementations rather than just guiding them.</p>
</blockquote>
<p>But here's what's been nagging at me: my current workflow doesn't feel "realistic" if I take human collaboration as the baseline. My current specification-driven development workflow is a three-phase process where I give input to a single agent to create detailed technical specifications, break them down into implementation plans, and execute step-by-step with comprehensive context management through what I call "Project Intelligence." <a target="_blank" href="https://cremich.cloud/specification-driven-development-with-amazon-q-developer">I've written extensively about this approach</a>, and it has been transformative for me.</p>
<p>However, I see a problem we might be running into.</p>
<h2 id="heading-the-single-perspective-trap">The Single-Perspective Trap</h2>
<p>How were specs created in our human world before AI entered the field? Multiple people with different roles collaborate: product managers bring market understanding, architects contribute technical constraints, UX designers add interaction patterns, security engineers raise compliance factors, DevOps engineers highlight operational considerations, and QA engineers anticipate edge cases. Each perspective makes the specification more comprehensive and implementable.</p>
<p>How are specs created with AI-assisted software engineering today? One person enters input to one agent.</p>
<p>For me, this feels fundamentally incomplete, but I didn't fully understand why until I started adopting the spec-driven approach. Despite detailed specification conversations with Q Developer, comprehensive Project Intelligence context, and structured prompt frameworks, I kept discovering critical requirements during implementation that felt obvious in hindsight:</p>
<ul>
<li><p>Security implications I hadn't considered</p>
</li>
<li><p>UX patterns that conflicted with existing flows</p>
</li>
<li><p>Operational complexities that would emerge at scale</p>
</li>
<li><p>Testing scenarios that weren't obvious from business requirements</p>
</li>
</ul>
<p>These weren't process failures—they were natural blind spots that emerge when any single perspective tries to capture the full complexity of a software system.</p>
<p>The collaborative approach I was craving is also constrained by current technical limitations. Most AI development tools are still designed around single-agent interactions. While custom agents offer specialized knowledge bases, they operate independently without a native capability for agents to build upon each other's insights or challenge assumptions.</p>
<p>Interestingly, <a target="_blank" href="https://docs.anthropic.com/en/docs/claude-code/sub-agents#automatic-delegation">Claude Code represents progress</a> toward orchestration approaches that could simulate collaborative team dynamics. But this technical gap between valuable multi-agent collaboration and what's available in mainstream tools creates a forcing function that keeps us in suboptimal workflows, regardless of how sophisticated our prompting techniques become.</p>
<h2 id="heading-where-we-need-to-go">Where We Need to Go</h2>
<p>The future of spec-driven development isn't about better prompts or more sophisticated single-agent workflows. It's about recreating the collaborative dynamic that makes human specification creation so effective. I see three potential paths forward:</p>
<h3 id="heading-path-1-upfront-human-collaboration-this-is-on-us-engineers">Path 1: Upfront Human Collaboration (this is on us engineers)</h3>
<p>Before engaging with AI, assemble cross-functional input. Instead of one person having a specification conversation with Q Developer, have multiple stakeholders contribute their perspective upfront:</p>
<ul>
<li><p>Product requirements from the PM</p>
</li>
<li><p>Technical constraints from the architect</p>
</li>
<li><p>User experience considerations from design</p>
</li>
<li><p>Security and compliance requirements from security engineering</p>
</li>
<li><p>Operational requirements from DevOps</p>
</li>
</ul>
<p>Then use this rich, multi-perspective input to drive the AI specification conversation.</p>
<h3 id="heading-path-2-multi-agent-collaboration-this-is-on-the-tool-vendors">Path 2: Multi-Agent Collaboration (this is on the tool vendors)</h3>
<p>Leverage specialized AI agents that can simulate different roles and perspectives:</p>
<ul>
<li><p>A "Product Agent" that focuses on user needs and business value</p>
</li>
<li><p>A "Security Agent" that raises compliance and risk considerations</p>
</li>
<li><p>An "Architecture Agent" that considers technical design and scalability</p>
</li>
<li><p>A "UX Agent" that thinks about user workflows and interaction patterns</p>
</li>
</ul>
<p>These agents challenge assumptions and surface considerations that a single agent might miss. Most importantly: those agents can work together!</p>
<h3 id="heading-path-3-hybrid-collaborative-workflows-this-is-on-us-engineers">Path 3: Hybrid Collaborative Workflows (this is on us engineers)</h3>
<p>Combine human expertise with AI capabilities in iterative cycles. Start with human stakeholder input, let AI agents extend and challenge those perspectives, then bring humans back in to validate and refine.</p>
<h2 id="heading-the-bottom-line">The Bottom Line</h2>
<p>Spec-Driven Development has been a massive improvement over vibe coding, bringing rigor, context, and systematic thinking back to AI-assisted software engineering. But if we're honest about how human engineering teams create specifications - through rich, multi-perspective collaboration - then our current single-person-to-single-agent workflows are still leaving significant value on the table.</p>
<p>I believe the next evolution of AI-assisted software engineering will be fundamentally collaborative. Not just human-to-AI collaboration, but multi-perspective collaboration that mirrors how high-performing human teams actually work. Instead of considering AI assistants as individual tools, we should start thinking about them as participants in collaborative engineering processes.</p>
<p>The question isn't whether we should abandon spec-driven development - it's whether we should evolve it to be more collaborative, more representative of how actual engineering teams work, and more aligned with the patterns that make human specification creation so effective.</p>
<p>What do you think? Are we heading in the right direction, or do we need a more fundamental shift toward collaborative specification creation?</p>
<hr />
<p><em>Want to dive deeper into spec-driven development approaches? Check out</em> <a target="_blank" href="https://promptz.dev/"><em>promptz.dev</em></a> <em>for a collection of prompts and patterns for AI-assisted software engineering. I'd love to hear about your experiences with collaborative specification creation -</em> <a target="_blank" href="https://www.linkedin.com/in/christian-bonzelet/"><em>reach out</em></a> <em>and let's continue this conversation.</em></p>
]]></content:encoded></item><item><title><![CDATA[Specification-Driven Development with Amazon Q Developer]]></title><description><![CDATA[Introduction
I used to be that developer. You know the one—throwing half-baked prompts at AI assistants like "revactor this!1!" with spelling errors and hoping for magic. The output often looked impressive, felt like magic, but it wasn't engineering....]]></description><link>https://cremich.cloud/specification-driven-development-with-amazon-q-developer</link><guid isPermaLink="true">https://cremich.cloud/specification-driven-development-with-amazon-q-developer</guid><category><![CDATA[amazon Q developer CLI ]]></category><category><![CDATA[Amazon Q developers]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Fri, 13 Jun 2025 10:25:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1749811895142/bc95bbb1-fc59-4a73-8c21-d0a42283bc84.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>I used to be that developer. You know the one—throwing half-baked prompts at AI assistants like "revactor this!1!" with spelling errors and hoping for magic. The output often looked impressive, felt like magic, but it wasn't engineering. I was just riding the wave of AI-generated coding without structure, context, or intention.</p>
<p>How can we get better?</p>
<h2 id="heading-the-problem-with-vibe-coding">The Problem with Vibe Coding</h2>
<p>Watching AI generate code feels incredible. But after the initial euphoria wears off, you're left with a fundamental problem: you've outsourced thinking to a machine that doesn't understand your business context, architectural decisions, or long-term goals. You get code, but you don't get engineering.</p>
<p>AI assistants like Amazon Q Developer aren't just code generators—they're software engineering partners. But like any partnership, the quality of collaboration depends on how well you communicate, share context, and structure your work together.</p>
<p>Goodbye vibe-coding. Hello, <strong>specification-driven development</strong>: a systematic approach that treats AI collaboration as serious software engineering. Instead of random prompts, we have structured conversations. Instead of disconnected code snippets, we build comprehensive solutions. Instead of starting with implementation, we start with understanding.</p>
<h2 id="heading-the-foundation">The Foundation</h2>
<p>Before diving into my three-phase workflow, let me introduce the foundation that makes everything possible: <strong>Project Intelligence</strong>. Think of it as giving Amazon Q Developer a persistent memory about your project.</p>
<p>AI assistants have a fundamental limitation—their memory resets between sessions. Imagine if your human colleagues forgot everything about your project every time they left for coffee. You'd spend more time re-explaining context than actually building software.</p>
<p>Project Intelligence solves this through a structured documentation system living in <code>.amazonq/project-intelligence/</code>. It consists of interconnected markdown files:</p>
<ul>
<li><p><code>projectbrief.md</code> - The foundation defines core requirements and goals</p>
</li>
<li><p><code>productContext.md</code> - Why the project exists and problems it solves</p>
</li>
<li><p><code>systemPatterns.md</code> - Architecture and design patterns in use</p>
</li>
<li><p><code>techContext.md</code> - Technologies, setup, and constraints</p>
</li>
<li><p><code>activeContext.md</code> - Current work focus and recent changes</p>
</li>
<li><p><code>progress.md</code> - What works, what's left, current status</p>
</li>
</ul>
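<p>If you want to try this structure yourself, a small helper can scaffold the files (my own convenience script, not part of Q Developer):</p>

```python
import tempfile
from pathlib import Path

# The six interconnected Project Intelligence files described above.
FILES = [
    "projectbrief.md", "productContext.md", "systemPatterns.md",
    "techContext.md", "activeContext.md", "progress.md",
]


def scaffold_project_intelligence(root="."):
    """Create .amazonq/project-intelligence/ with empty starter files."""
    base = Path(root) / ".amazonq" / "project-intelligence"
    base.mkdir(parents=True, exist_ok=True)
    created = []
    for name in FILES:
        f = base / name
        if not f.exists():
            # Seed each file with a title heading to fill in later.
            f.write_text(f"# {name[:-3]}\n")
        created.append(f)
    return created


# Demo in a throwaway directory.
created = scaffold_project_intelligence(tempfile.mkdtemp())
```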
<p>These files create context for both humans and AI. When I start a session with Amazon Q Developer, I can reference this documentation using context modifiers, like <code>@folder</code> in the IDE chat or <code>/context</code> in the CLI chat. Suddenly, Q Developer understands what I'm asking for and why I'm asking for it within the broader project context.</p>
<p>Project Intelligence is highly inspired by what Cline calls “memory bank”. It isn’t just an AI assistant-specific feature - it’s a methodology for managing AI context through structured documentation.</p>
<h2 id="heading-my-current-workflow-june-2025">My current workflow (June 2025)</h2>
<p>Let me walk you through exactly how this works by showing you an actively developed feature to improve tag management and discovery in <a target="_blank" href="https://www.promptz.dev/">promptz.dev</a>—the community platform I built for sharing Amazon Q Developer prompts.</p>
<h3 id="heading-phase-1-specification">Phase 1: Specification</h3>
<p>I start every feature with a conversation. But this isn't just any conversation—it's a structured engineering discussion using proven prompt frameworks to write a technical specification.</p>
<p>Writing technical specifications isn't just good practice—it's fundamental software engineering. As engineers, our primary role is to solve technical problems. Technical specifications help you think through problems systematically. They increase the chances of building something that matters, building “the thing” right. They force you to consider edge cases, define success criteria, and understand dependencies before you write your first line of code.</p>
<p>The challenge? You might find writing specifications frustrating, especially if you don't consider yourself a strong technical writer. This is where AI assistants shine—they can help reduce this productivity friction and make you more likely to write specifications at all.</p>
<p>Here's the exact prompt I use to kick off the specification phase, available also on <a target="_blank" href="https://www.promptz.dev/prompts/prompt/specifications-for-development-tasks-24b69aa8">promptz.dev</a> for a full preview:</p>
<pre><code class="lang-markdown">You are acting as an experienced software engineer and technical writer. Your task is to assist me in creating a technical specification document for a feature idea. To complete the task, you must
<span class="hljs-bullet">-</span> Read ALL files in the .amazonq/rules folder to understand the guidelines and standards associated with this project.
<span class="hljs-bullet">-</span> Read ALL files in the project-intelligence folder to understand the project and the associated problem domain.
<span class="hljs-bullet">-</span> Keep asking relevant questions until you have gathered all relevant information and requirements.
<span class="hljs-bullet">-</span> Compile your findings into a comprehensive, developer-ready technical specification.

Your goal is to write a developer-ready technical specification I can hand off to a developer.

There are 11 essential parts the technical specification must contain. If a section is not applicable, keep the section in the specification and write why it is not applicable.

<span class="hljs-bullet">1.</span> Summary: Brief overview of the feature, the user problem it solves, and the proposed solution...
<span class="hljs-bullet">2.</span> Goals: What the feature wants to achieve (goals) and what is explicitly out of scope (non-goals)...
[continues with all 11 sections]
</code></pre>
<p>The key insight is treating Amazon Q Developer like an engineering partner. I don't just say "add better tag support"—I have the same type of technical discussion I'd have with a human peer:</p>
<ul>
<li><p>What problem are we solving?</p>
</li>
<li><p>Who are the users, and what do they need?</p>
</li>
<li><p>What are the technical constraints?</p>
</li>
<li><p>How does this fit into our existing architecture?</p>
</li>
<li><p>What are the success criteria?</p>
</li>
</ul>
<p>Amazon Q Developer, armed with the Project Intelligence context, asks probing questions:</p>
<ul>
<li><p>"How should the tag filtering interact with the existing search functionality?"</p>
</li>
<li><p>"Do we need to maintain backward compatibility with current tags stored in the prompts table?"</p>
</li>
<li><p>"What's the expected scale—hundreds or thousands of tags?"</p>
</li>
</ul>
<p>This back-and-forth creates <a target="_blank" href="https://github.com/cremich/promptz/blob/feature/tagging/docs/specs/tag-management/spec.md">the specification document</a>. Not just requirements, but technical architecture, data models, error handling strategies, and testing approaches. Whatever you need as a developer to implement the feature correctly.</p>
<p><strong>Key highlights from the specification</strong> <a target="_blank" href="https://github.com/cremich/promptz/blob/feature/tagging/docs/specs/tag-management/spec.md"><strong>you can read on github</strong></a><strong>:</strong></p>
<ul>
<li><p><strong>Clear problem definition</strong>: "Currently, Promptz uses a static list of tags with limited query capabilities, hampering user experience."</p>
</li>
<li><p><strong>Specific goals and non-goals</strong>: What we're building and explicitly what we're not</p>
</li>
<li><p><strong>Detailed data model</strong> with entity relationship diagrams using Mermaid syntax</p>
</li>
<li><p><strong>Migration strategy</strong> that maintains backward compatibility</p>
</li>
<li><p><strong>Risk assessment</strong> with mitigation strategies</p>
</li>
<li><p><strong>Acceptance criteria</strong> that define exactly what "done" looks like</p>
</li>
</ul>
<p>The specification covers everything from user stories to performance requirements to open questions that need stakeholder input. This isn't just a feature request—it's a complete engineering blueprint that guides every subsequent decision, ready for you to fine-tune.</p>
<h3 id="heading-phase-2-implementation-plan">Phase 2: Implementation Plan</h3>
<p>With a solid specification, I move to the second phase: creating a step-by-step implementation blueprint. This is where I break down the complex feature into manageable, testable chunks.</p>
<p>Again, this isn't revolutionary—it's classical software engineering. Breaking complex tasks into smaller, iterative pieces that build on each other is how we've always approached large problems. It's the foundation of agile methodologies, sprint planning, and virtually every successful software project. The principle of "divide and conquer" has been core to engineering for decades.</p>
<p>What's different isn't the approach—it's the tool. Instead of manually decomposing features in planning meetings or writing user stories by hand, I'm collaborating with Amazon Q Developer to create a structured implementation plan. Steered correctly, the AI can bring the same systematic thinking we've always valued, but with the speed and consistency that human planning sometimes lacks.</p>
<p>Here's my prompt for the <a target="_blank" href="https://www.promptz.dev/prompts/prompt/implementation-plan-18ecdf89">implementation planning phase</a>:</p>
<pre><code class="lang-markdown">You are acting as an experienced software engineer. Your task is to create a detailed, step-by-step implementation plan. To complete the task you must

<span class="hljs-bullet">-</span> read ALL files in the .amazonq/rules folder to understand guidelines and standards associated with this project.
<span class="hljs-bullet">-</span> read ALL files in the project-intelligence folder to understand the project and the associated problem domain.
<span class="hljs-bullet">-</span> read the feature specification.
<span class="hljs-bullet">-</span> define a solid implementation plan.
<span class="hljs-bullet">-</span> break it down into small, iterative chunks that build on each other.
<span class="hljs-bullet">-</span> review the results and make sure that the steps are small enough to be implemented safely with strong testing, but big enough to move the project forward.
<span class="hljs-bullet">-</span> iterate until you think that the steps are right-sized for this project.

Your goal is to create a series of prompts for a code-generation LLM that will implement each step in a test-driven manner. The prompts should be structured using the RISEN framework. Use the following prompt template for each prompt:

You are acting as [insert the role you want AI to take]. 
Your task is to [insert the main task you want AI to complete]. 
To complete the task you must: [Insert numbered list of steps to follow]
Your goal is to [Insert a description of the primary goal]
Constraints: [Add numbered list of constraints, rules and narrowing factors]

Save the implementation plan as <span class="hljs-code">`prompt_plan.md`</span> next to the feature specification file.

For each prompt, ensure that it contains a step to read all files in the .amazonq/rules folder to understand the guidelines and standards.
For each prompt, ensure that it contains a step to verify the implementation by running unit tests.
For each prompt, ensure that it contains a constraint to strictly adhere to the scope as described in the steps to complete a given task.
Make sure that each prompt builds on the previous prompts.
Format each prompt as plaintext codeblock.
Use markdown.
</code></pre>
<p>Amazon Q Developer reads the specification and creates a detailed prompt plan—essentially a series of prompts using the structured <a target="_blank" href="https://community.aws/content/2mzFQJXPQaF5iATLz1KKAQd9JeT/considerations-for-effective-prompts-engineering-and-prompt-frameworks#3-the-risen-framework">RISEN framework</a>. Each prompt builds on the previous one:</p>
<ol>
<li><p><strong>Project Setup</strong>: Initialize tests and basic data structures</p>
</li>
<li><p><strong>Core Tag Model</strong>: Implement tag entity with validation</p>
</li>
<li><p><strong>Tag Service Layer</strong>: Add business logic for tag operations</p>
</li>
<li><p><strong>UI Components</strong>: Build reusable tag display and filter components</p>
</li>
<li><p><strong>Integration</strong>: Wire everything together with proper error handling</p>
</li>
<li><p><strong>Testing</strong>: Comprehensive test coverage for each layer</p>
</li>
</ol>
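<p>To make this tangible, a single step of such a prompt plan might look like the following. This is hypothetical example content following the RISEN template from the planning prompt; your generated plan will reflect your own specification:</p>

```markdown
You are acting as an experienced software engineer.
Your task is to implement the core tag model with validation.
To complete the task you must:
1. Read all files in the .amazonq/rules folder to understand the guidelines and standards.
2. Write failing unit tests for tag creation and validation.
3. Implement the tag entity until all tests pass.
4. Verify the implementation by running the unit tests.
Your goal is to provide a validated tag entity that subsequent steps can build on.
Constraints:
1. Strictly adhere to the scope described in the steps above.
2. Do not modify files outside the tag model and its tests.
```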
<p>If you manage your context correctly, each step includes specific context references to our Project Intelligence and previous implementation steps. This can be achieved either manually by adding instructions to your prompt to read the project intelligence or by adding the aforementioned context management features of Q Developer.</p>
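<p>With the Q Developer CLI, the second option can be as simple as adding the files to your context once instead of repeating the instruction in every prompt. A sketch of how this might look in a chat session; exact subcommands and glob support depend on your Q CLI version, and the <code>project-intelligence</code> path is this post's example layout:</p>

```plaintext
# inside an Amazon Q Developer CLI chat session
/context add project-intelligence/**/*.md
/context show
```

<p>Files under <code>.amazonq/rules</code> are typically part of the default context already, so you usually only need to add your project-specific knowledge files.</p>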
<h3 id="heading-phase-3-implementation-execution">Phase 3: Implementation Execution</h3>
<p>Finally, we arrive at the execution phase, where I work through each prompt in the implementation plan. But here's the crucial difference from vibe coding: every interaction is contextual and purposeful.</p>
<p>Instead of asking Amazon Q Developer to "build a tag system," I'm having focused conversations, one prompt per implementation step.</p>
<p>Each response builds on the established context. Amazon Q Developer understands:</p>
<ul>
<li><p>The existing codebase patterns</p>
</li>
<li><p>The specific requirements from our specification</p>
</li>
<li><p>How the step fits into the larger feature</p>
</li>
<li><p>The testing and quality standards we maintain</p>
</li>
</ul>
<p>The result? Collaboration with AI assistants that feels like software engineering.</p>
<h2 id="heading-the-mental-shift">The Mental Shift</h2>
<p>You're no longer just writing code (who said that software engineering was all about just writing code?)—you're orchestrating a sophisticated engineering process where AI handles the heavy lifting while you focus on the high-level thinking.</p>
<p>My time allocation shifted dramatically:</p>
<ul>
<li><p><strong>More time</strong> writing specifications, documenting architecture decisions, and maintaining project context</p>
</li>
<li><p><strong>More time</strong> verifying outputs, ensuring consistency, and making strategic technical decisions</p>
</li>
<li><p><strong>Less time</strong> on repetitive coding tasks, debugging syntax errors, and researching implementation details</p>
</li>
</ul>
<p>But here's the paradox: by spending more time on documentation and planning, I ship features faster. Better context leads to better AI output, which leads to fewer iterations and less debugging, while I remain in the lead: part of the solution, not the problem.</p>
<h2 id="heading-the-bigger-picture">The Bigger Picture</h2>
<p>Perhaps the most surprising discovery was how this approach brutally enforces good engineering practices. Documentation isn't optional when your AI partner needs context to function effectively. Architecture decisions can't be tribal knowledge when you need to communicate them clearly to an AI system. Who does not celebrate well-crafted and accessible documentation and decisions?</p>
<p>Could AI-assisted software engineering be the best thing to happen to our industry in decades? Not because it writes code for us, but because it forces us to practice actual engineering instead of just hacking together solutions.</p>
<p>The benefits I see over time:</p>
<ul>
<li><p><strong>New team members onboard faster</strong> because documentation exists and is accessible.</p>
</li>
<li><p><strong>AI tools become not just tools but partners</strong> because they understand the project context.</p>
</li>
<li><p><strong>An entire workflow evolves with clarity</strong> because decisions are documented and software engineering practices are established independently of the toolset.</p>
</li>
<li><p><strong>Technical debt decreases</strong> because specifications catch design issues before implementation, either by yourself, your peers, or your AI assistant.</p>
</li>
</ul>
<h2 id="heading-getting-started-your-first-specification-driven-feature-with-amazon-q-developer">Getting Started: Your First Specification-Driven Feature with Amazon Q Developer</h2>
<p>Ready to try this approach? Start small:</p>
<ol>
<li><p><a target="_blank" href="https://www.promptz.dev/rules/rule/project-intelligence-dbd52e23"><strong>Set up Project Intelligence</strong></a> for an existing project—even just the core files will make a difference</p>
</li>
<li><p><strong>Pick a simple feature</strong> you've been meaning to build</p>
</li>
<li><p><strong>Resist the urge to code immediately</strong>—spend 30 minutes having a <a target="_blank" href="https://www.promptz.dev/prompts/prompt/specifications-for-development-tasks-24b69aa8">specification conversation</a> with Amazon Q Developer first. I prefer to use the CLI, but you can try it out with other modalities that provide agentic behaviour.</p>
</li>
<li><p><strong>Document everything</strong>—treat each conversation as valuable project knowledge</p>
</li>
<li><p><strong>Build iteratively</strong>—follow your <a target="_blank" href="https://www.promptz.dev/prompts/prompt/implementation-plan-18ecdf89">implementation plan</a> step by step</p>
</li>
</ol>
<p>You might feel like you're moving slower at first. That's normal. You're trading the illusion of speed (vibe coding) for actual engineering velocity.</p>
<h2 id="heading-the-future-of-software-engineering">The Future of Software Engineering</h2>
<p>Specification-driven development isn't just about working better with AI—it's about returning to the fundamentals of software engineering while leveraging AI as a force multiplier.</p>
<p>We're not replacing human judgment with artificial intelligence. We're augmenting human engineering with AI assistance.</p>
<p>The future belongs to developers who can bridge the gap between human creativity and AI capability. The question isn't whether AI will change software engineering—it's whether you'll lead that change or just ride along.</p>
<hr />
<p><em>Want to experience this approach firsthand? Check out</em> <a target="_blank" href="https://promptz.dev/"><em>promptz.dev</em></a> <em>for a collection of Amazon Q Developer prompts, including the specification and implementation prompts I use in my workflow. The platform itself was built using the exact approach described in this post.</em></p>
<p><em>Have questions about specification-driven development? I'd love to continue the conversation—</em><a target="_blank" href="https://www.linkedin.com/in/christian-bonzelet/"><em>reach out</em></a> <em>and share your experiences with AI-assisted software engineering.</em></p>
]]></content:encoded></item><item><title><![CDATA[Supercharging the Amazon Q Developer Dev Agent]]></title><description><![CDATA[Introduction
The most powerful solutions often emerge from unexpected discoveries. Recently, while working on a feature for promptz.dev, I stumbled upon a technique that dramatically improved my interaction with Amazon Q Developer. This discovery not...]]></description><link>https://cremich.cloud/supercharging-the-amazon-q-developer-dev-agent</link><guid isPermaLink="true">https://cremich.cloud/supercharging-the-amazon-q-developer-dev-agent</guid><category><![CDATA[AWS]]></category><category><![CDATA[Amazon Q]]></category><category><![CDATA[generative ai]]></category><category><![CDATA[AI Coding Assistant]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Fri, 24 Jan 2025 21:15:34 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1737753314686/35193ed1-8c44-48ad-9240-5e9a7f70b40b.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>The most powerful solutions often emerge from unexpected discoveries. Recently, while working on a feature for <a target="_blank" href="https://promptz.dev">promptz.dev</a>, I stumbled upon a technique that dramatically improved my interaction with Amazon Q Developer. This discovery not only enhanced the quality of the generated code but also made the entire development process feel more natural and effective.</p>
<p>Like many developers, I initially approached Amazon Q's dev agent with straightforward prompts, expecting it to understand my requirements immediately. The results were mixed – sometimes spot-on, other times missing crucial context that seemed obvious to me but wasn't explicitly stated in my prompts.</p>
<p>The breakthrough came when I tried a different approach. Instead of jumping straight to the dev agent, I found myself in a natural conversation with Q Developer about the feature requirements. As the discussion evolved, something fascinating happened.</p>
<h2 id="heading-the-context-challenge">The Context Challenge</h2>
<p>As developers embrace AI coding assistants, we often overlook a fundamental truth about these tools: they're only as good as the context we provide them. During my work with Amazon Q Developer and Generative AI in general, I've come to appreciate that context isn't just helpful—it's essential for generating reliable, and accurate outputs.</p>
<p>The relationship between input and output in AI coding assistants follows a simple yet powerful formula as <a target="_blank" href="https://dev.to/aws/amazon-q-developer-tips-no8-understanding-context-2305">described by Ricardo Sueiras</a> in his awesome <a target="_blank" href="https://dev.to/aws/amazon-q-developer-tips-25-tips-to-supercharge-your-development-2ffg">series of Amazon Q Developer Tips</a>:</p>
<blockquote>
<p><strong>Prompt + Context = Output</strong></p>
</blockquote>
<p>While this equation might seem obvious, its implications run deep. What fascinates me most is how context reaches the AI assistant through both explicit and implicit channels.</p>
<p><strong>Explicit Context</strong> is what we consciously provide:</p>
<ul>
<li><p>Project requirements in our prompts</p>
</li>
<li><p>Code snippets we share</p>
</li>
<li><p>Architecture decisions we explain</p>
</li>
<li><p>Business rules we define</p>
</li>
</ul>
<p><strong>Implicit Context</strong> comes from how the tools themselves work. <a target="_blank" href="https://www.youtube.com/watch?v=ovo0A8AqSIc">The Amazon Q dev agent</a>, for example, is equipped with “tools” to explore, search, modify, add, or remove files, or to undo previous changes in an internal text-based IDE. The agent selects a tool and applies it to its environment - as of today, limited to the source code repository.</p>
<p>Think of context as a two-way street. What you tell the AI assistant directly is just as important as what it can discover through its built-in capabilities. Understanding this dual nature of context has transformed how I approach AI-assisted development. I've learned to be deliberate about explicit context while leveraging the tool's implicit context-gathering capabilities.</p>
<p>A well-crafted prompt alone isn't enough to generate high-quality code. Without proper context, even the most carefully written prompt can lead to implementations that miss critical requirements or fail to align with existing system architecture.</p>
<p>The typical anti-patterns I encounter are context-related: expecting the AI to understand our codebase's conventions without explicitly sharing them, and providing only fragments of what we need while keeping crucial details in our heads.</p>
<h2 id="heading-a-new-approach-the-chat-to-agent-technique">A New Approach: The Chat-to-Agent Technique</h2>
<p>I've discovered that the most effective way to leverage Amazon Q Developer's capabilities is to mirror how we naturally collaborate with our human peers. Instead of jumping straight into implementation with the dev agent, I start with a conversation.</p>
<h3 id="heading-start-with-conversation">Start with Conversation</h3>
<p>The key to successful AI-assisted development lies in how we initiate the dialogue. When explaining a new feature to a peer engineer, we don't start with implementation details – we begin with the problem we're trying to solve. Here's how I started the conversation about a feature for <a target="_blank" href="http://promptz.dev">promptz.dev</a> that should allow users to mark submitted prompts as their favorites:</p>
<blockquote>
<p>As more and more prompts are being submitted, it gets harder for users to discover relevant prompts for their use-cases. What changes need to be implemented in this @workspace to allow users to mark prompts as favorites?</p>
</blockquote>
<p>This open-ended question led to a natural exploration of UX considerations, data model implications, performance requirements, and implementation constraints.</p>
<blockquote>
<p>💡 Pro Tip: Use the @workspace context modifier in your initial question. This automatically includes relevant chunks of your workspace code as context.</p>
</blockquote>
<h3 id="heading-clarifying-requirements-through-dialogue">Clarifying Requirements Through Dialogue</h3>
<p>The beauty of this conversational approach is how naturally it surfaces important considerations. The conversation evolved organically. The initial proposal of Q Developer had some pitfalls that I wanted to clarify, so I asked:</p>
<blockquote>
<p>If multiple users would favorite the same prompt at the same time, wouldn't this result in data inconsistencies?</p>
</blockquote>
<p>This led to a deeper discussion about race conditions in concurrent operations, the need for atomic updates, data consistency guarantees, and alternative implementation approaches. This mirrors how technical discussions flow in real engineering teams, where requirements and constraints emerge through dialogue rather than being fully formed from the start.</p>
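<p>The race condition from that discussion is easy to reproduce outside of any AWS service. The following self-contained TypeScript sketch is purely illustrative (it is not the promptz.dev implementation, and names like <code>naiveFavorite</code> are made up): it shows why a read-modify-write favorite counter loses concurrent updates, and why atomic or serialized updates are needed:</p>

```typescript
// Simulated favorite counter with artificial latency between read and write.
let favoriteCount = 0;
const delay = () => new Promise((resolve) => setTimeout(resolve, 5));

// Naive read-modify-write: concurrent callers all read the same stale value,
// so their writes clobber each other and updates are lost.
async function naiveFavorite() {
  const current = favoriteCount; // read
  await delay();                 // another request interleaves here
  favoriteCount = current + 1;   // write based on a stale read
}

// Serialized alternative: a promise chain acts as a simple mutex, mimicking
// the effect of an atomic counter update performed by the data store itself.
let lock = Promise.resolve();
function atomicFavorite() {
  lock = lock.then(async () => {
    const current = favoriteCount;
    await delay();
    favoriteCount = current + 1;
  });
  return lock;
}

favoriteCount = 0;
await Promise.all(Array.from({ length: 20 }, () => naiveFavorite()));
const naiveResult = favoriteCount; // far fewer than 20: updates were lost

favoriteCount = 0;
await Promise.all(Array.from({ length: 20 }, () => atomicFavorite()));
const atomicResult = favoriteCount; // exactly 20

console.log({ naiveResult, atomicResult });
```

<p>In a real backend, the serialization would come from the data store itself, for example an atomic counter or a conditional update, rather than from an in-process lock.</p>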
<h3 id="heading-crafting-the-perfect-dev-agent-prompt">Crafting the Perfect Dev Agent Prompt</h3>
<p>Once the discussion clarified all aspects of the feature, Q Developer demonstrated another powerful capability – generating an optimal prompt for its dev agent based on our conversation. The resulting prompt was remarkably precise, incorporating all the nuances and edge cases we'd discussed.</p>
<blockquote>
<p>💡 Pro Tip: Don't rush to implementation. Let the conversation continue until you see the requirements crystallize into a clear implementation path.</p>
</blockquote>
<p>This natural progression from conversation to implementation helps maintain alignment between business requirements and technical solutions throughout the development process.</p>
<h2 id="heading-a-real-world-implementation">A Real-World Implementation</h2>
<p>Let me walk you through how this chat-first technique transformed a seemingly simple feature request into a robust implementation. The initial feature request for <a target="_blank" href="http://promptz.dev">promptz.dev</a> was straightforward: allow users to mark prompts as favorites.</p>
<h3 id="heading-initial-approach-vs-chat-first-method">Initial Approach vs. Chat-First Method</h3>
<p>My initial attempt was typical of how I approached the dev agent until now. Jump straight to the dev agent, provide a basic feature description, and start implementing right away. Here's what I initially sent to the dev agent:</p>
<pre><code class="lang-markdown">With a growing number of prompts, finding a favorite prompt will become more cumbersome. 
Users should be able to mark a prompt as favorite. A new page "Favorites" should allow 
users to browse through all prompts they marked as favorites.
</code></pre>
<p>The result? Functional code, but missing crucial elements. The dev agent did not understand that PROMPTZ is built on top of AWS Amplify Gen2 and suggested a brand-new GraphQL schema file to implement the data model. The implementation also missed crucial elements like atomic operations for concurrent updates. And because the data model was not accurate, the data-fetching code built on top of it was incorrect as well.</p>
<p>Using the chat-first technique, the conversation naturally surfaced critical functional and non-functional requirements. This is the prompt that Amazon Q Developer created for me:</p>
<pre><code class="lang-markdown">Here's a simplified prompt focusing only on the core favorites functionality:

"Implement a feature that allows users to mark prompts as favorites and view their 
favorited prompts. Here are the specific requirements:

<span class="hljs-bullet">-</span> Update the schema in <span class="hljs-code">`amplify/data/resource.ts`</span> to add:
<span class="hljs-code">```
favorite: a.model({
  id: a.id().required(),
  prompt: a.belongsTo('prompt'),
  user_id: a.string().required(),
  created_at: a.datetime().required(),
})
```</span>

<span class="hljs-bullet">-</span> Add appropriate authorization rules to allow users to manage only their own favorites
<span class="hljs-bullet">-</span> Add a hasMany relationship from prompt to favorites

Create repository methods in <span class="hljs-code">`PromptRepository`</span>:
<span class="hljs-bullet">-</span> toggleFavorite(promptId: string, userId: string): Promise<span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">void</span>&gt;</span></span>
<span class="hljs-bullet">-</span> listFavorites(userId: string): Promise<span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">PromptViewModelCollection</span>&gt;</span></span>
<span class="hljs-bullet">-</span> isFavoritedByUser(promptId: string, userId: string): Promise<span class="xml"><span class="hljs-tag">&lt;<span class="hljs-name">boolean</span>&gt;</span></span>

Create a new page component pages/favorites.tsx that:
<span class="hljs-bullet">-</span> Lists all prompts marked as favorites by the current user
<span class="hljs-bullet">-</span> Reuses the existing PromptCollection component
<span class="hljs-bullet">-</span> Shows appropriate loading and error states
<span class="hljs-bullet">-</span> Add a favorite toggle button to the existing Prompt component that:
<span class="hljs-bullet">-</span> Shows whether the current user has favorited the prompt
<span class="hljs-bullet">-</span> Allows toggling the favorite status
<span class="hljs-bullet">-</span> Provides visual feedback during the toggle action
<span class="hljs-bullet">-</span> Add a navigation link to the Favorites page in the app's main navigation

Focus on:
<span class="hljs-bullet">-</span> Clean separation of concerns
<span class="hljs-bullet">-</span> Proper error handling
<span class="hljs-bullet">-</span> Loading states for better UX
<span class="hljs-bullet">-</span> Reusing existing components where possible
<span class="hljs-bullet">-</span> Type safety throughout the implementation

Please analyze the existing codebase and provide a step-by-step implementation plan 
before making any changes."
</code></pre>
<p>The implementation plan that emerged was comprehensive and production-ready. Here's what made the difference in my observation:</p>
<ul>
<li><p><strong>Accurate Data Model Evolution:</strong> The discussion and associated context led to a proper data model based on AWS Amplify Gen2 including the correct authorization mechanism.</p>
</li>
<li><p><strong>Better modularization:</strong> The dev agent encapsulates logic in new react components like a `FavoriteToggle.tsx` component and reflects the current logic of implementing data fetching using react hooks.</p>
</li>
</ul>
<p>The most striking difference between the two approaches wasn't just in the code quality - it was in the completeness of the solution. The chat-first approach surfaced edge cases early in the development process, led to better design decisions, and produced more maintainable code.</p>
<p>I guess I would have received similar results by providing the dev agent with more feedback after the first implementation plan. However, with the chat-to-agent approach, I was able to shift important reasoning steps from the agent to me, using the chat and explicitly provided context - making me the human in the lead. That drastically reduced both my time to first commit and the time to get this feature into production.</p>
<p>The key elements that made the prompt effective were clear specifications and precision. You can do a quick litmus test and ask yourself: <strong><em>“If I were an engineer, what would help me to implement this feature? The version that Amazon Q Developer created, or my initial naive approach above?”</em></strong></p>
<h2 id="heading-looking-forward">Looking Forward</h2>
<p>During my recent conversation with Ricardo Sueiras from AWS, I learned that he follows a similar pattern. His workflow involves capturing chat outputs into local workspace files, which he then references in his dev agent prompts or includes as implicit context using the `@context` modifier. This manual approach works because the dev agent is already equipped with file-related tools that can explore and understand the workspace context.</p>
<p>This leads me to an interesting question: <strong>What if the Amazon Q Developer agents could automatically access and understand the conversation history when invoked within a chat?</strong> This capability would transform the chat-to-agent technique into a seamless, integrated workflow.</p>
<p>Imagine combining this contextual understanding with other powerful features of Amazon Q Developer:</p>
<ul>
<li><p>Using chat context to generate more accurate unit tests using the test agent.</p>
</li>
<li><p>Leveraging discussion history for better code reviews using the review agent.</p>
</li>
<li><p>Enhancing documentation generation with conversational insights using the doc agent.</p>
</li>
</ul>
<p>Looking ahead, I envision AI coding assistants that seamlessly blend conversation and implementation, much like pair programming with a highly capable colleague. The more we can bridge the gap between human communication and machine understanding, the more powerful our development workflows will become.</p>
<p>This isn't just about writing better code—it's about transforming how we interact with AI development tools to create better software solutions.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>What started as a simple experiment with the favorites feature for <a target="_blank" href="https://promptz.dev">promptz.dev</a> revealed a powerful technique that I will try in more scenarios in the future. The power of this approach lies in its naturalness. By mirroring how we actually work with our peers, we might:</p>
<ul>
<li><p>Build richer context through organic conversation</p>
</li>
<li><p>Surface critical requirements before implementation</p>
</li>
<li><p>Leverage both explicit and implicit context effectively</p>
</li>
<li><p>Create more maintainable and production-ready code</p>
</li>
</ul>
<p>I encourage you to try this technique in your development workflow. Start a conversation with Amazon Q Developer about your next feature. Challenge its initial suggestions, explore edge cases, and see how the dialogue shapes the final implementation.</p>
<p>Share your experiences and insights with me and the wider AWS community. How does this approach work for your use cases? What patterns have you discovered?</p>
]]></content:encoded></item><item><title><![CDATA[Video Processing on AWS]]></title><description><![CDATA[Introduction
Building video processing solutions used to be a complex endeavor, requiring deep expertise in encoding, streaming protocols, and infrastructure management. I've witnessed a significant shift in this landscape, particularly through my re...]]></description><link>https://cremich.cloud/video-processing-on-aws</link><guid isPermaLink="true">https://cremich.cloud/video-processing-on-aws</guid><category><![CDATA[AWS]]></category><category><![CDATA[media]]></category><category><![CDATA[video streaming]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Fri, 24 Jan 2025 11:46:38 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1737711648198/a1c8267f-8a80-42d5-badc-f0bbe53e7e5a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Building video processing solutions used to be a complex endeavor, requiring deep expertise in encoding, streaming protocols, and infrastructure management. I've witnessed a significant shift in this landscape, particularly through my recent work on a proof-of-concept for Bundesliga. What struck me most wasn't the technical complexity – it was the realization that AWS has fundamentally transformed video processing into a commodity.</p>
<p>AWS provides the required building blocks to build your video-on-demand platform, implement live streaming, or modernize existing broadcast workflows. The key is knowing how to arrange these blocks effectively.</p>
<p>In this post, I'll walk you through the fundamentals of video processing on AWS, from initial considerations to service selection. I'll share insights from real implementations, helping you navigate the decision-making process with confidence. My goal isn't to dive deep into technical specifics – that's a topic for future posts – but rather to provide you with a clear framework for building video processing solutions on AWS.</p>
<blockquote>
<p>💡 <strong>Pro Tip:</strong> Before diving into technical implementations, align with your stakeholders on key requirements. Understanding your needs around quality, latency, and reliability will guide your architectural decisions more effectively than starting with technical specifications.</p>
</blockquote>
<h2 id="heading-understanding-your-readiness">Understanding Your Readiness</h2>
<p>When I started working on video processing solutions, I quickly learned that success depends more on understanding your requirements than mastering technical complexities. Let me share the key readiness factors I have learned about when building video solutions on AWS. Each of these factors introduces trade-offs, and given your specific use-case, some might be more important than others. The key is finding the right balance that serves your business needs while staying within technical and budgetary constraints.</p>
<p><strong>Level of Expertise:</strong> Even with AWS making video processing a commodity, having basic knowledge about video processing concepts helps in making informed decisions. Do you have in-house AWS and video streaming expertise along with an existing AWS support plan?</p>
<p><strong>Budget Considerations:</strong> Data transfer out of the cloud is a major cost driver along with the resources needed to process that data. Keep in mind that data transfer correlates with your ability to scale. The higher the number of viewers, the more data needs to be transferred. This is where content delivery networks like Amazon CloudFront become crucial for both performance and cost optimization.</p>
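<p>To get a feeling for the numbers, a quick back-of-envelope calculation helps. The bitrate, event duration, and audience size below are purely illustrative assumptions, not Bundesliga figures:</p>

```typescript
// Rough data transfer estimate for a single live event (illustrative values).
const bitrateMbps = 5;            // average delivered bitrate per viewer
const durationSeconds = 90 * 60;  // a 90-minute event
const viewers = 10_000;

// Megabits -> megabytes (/8) -> gigabytes (/1000, decimal units)
const gbPerViewer = (bitrateMbps * durationSeconds) / 8 / 1000;
const totalGb = gbPerViewer * viewers;

console.log(gbPerViewer); // 3.375 GB per viewer
console.log(totalGb);     // 33750 GB, i.e. roughly 33.75 TB leaving the CDN
```

<p>Even at moderate quality, a single event can move tens of terabytes, which is why data transfer out and CDN pricing tend to dominate the budget discussion.</p>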
<p><strong>Video Quality Requirements:</strong> Higher video quality produces more data transfer and processing that might not be needed for all use-cases. Remember: not every stream needs 4K resolution. The key is finding the sweet spot between quality and cost that aligns with your viewers' expectations.</p>
<p><strong>Latency Expectations:</strong> Latency - the delay between a live action and the viewed image on the customer side - can be critical in applications like sporting events. In my work with Bundesliga, I learned that the appropriate level of latency depends on many factors. Understanding acceptable latency levels helps in choosing the right combination of AWS services.</p>
<p><strong>Reliability Requirements:</strong> Nobody wants to see a loading spinner when watching a video. Reliability can be achieved in multiple ways, up to creating multiple layers of redundancy - each with additional setup and costs. Your choice depends on how critical the video service is to your business operations.</p>
<p><strong>Workflow Enhancements:</strong> Think about your need for additional features like DRM encryption, captions and subtitles, or ad insertions. Each enhancement adds complexity and cost to your solution, so it's crucial to identify must-have features early in your planning process.</p>
<blockquote>
<p>💡 <strong>Pro Tip:</strong> These factors aren't independent - they interact with each other in ways that affect your overall solution. For example, higher quality requirements might impact both your budget and latency considerations.</p>
</blockquote>
<h2 id="heading-a-typical-video-processing-pipeline">A typical Video Processing Pipeline</h2>
<p>A typical video processing pipeline consists of four fundamental stages, independent of whether you are building a live-streaming or video-on-demand solution. Understanding these stages helps in making informed decisions about which AWS services to use and how to architect your solution effectively.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737712405601/4b8f02ff-3d93-4b3e-ab10-73d585668ae4.png" alt="A typical video processing pipeline consisting of fundamental stages: Ingest to get content into your pipeline, Process to make content consumable, Store to manage content lifecycle, and Deliver to reach your audience." class="image--center mx-auto" /></p>
<p><strong>Ingest - Getting Source Content into the System:</strong> This is where your content enters the video processing pipeline. For live streaming, this might be a camera. For video-on-demand, it could be a library of pre-recorded content. The key consideration here is ensuring reliable content acquisition while maintaining quality and managing costs.</p>
<p><strong>Process - Making Content Consumable:</strong> Raw video content often needs to be transformed before it can be efficiently delivered to viewers. This includes compression to reduce file sizes, transcoding to support different devices, and packaging content into formats suitable for streaming. The goal is to balance quality with efficient delivery.</p>
<p><strong>Store - Managing Content Lifecycle:</strong> Whether it's temporary storage for live streaming or long-term archival for video-on-demand, selecting the right content storage is crucial. This stage focuses on making content readily available while managing storage costs and ensuring content security.</p>
<p><strong>Deliver - Reaching Your Audience:</strong> The final stage involves getting your processed content to viewers efficiently and reliably. This includes content distribution strategies, handling varying network conditions, and ensuring smooth playback across different devices and locations.</p>
<blockquote>
<p>💡 <strong>Pro Tip:</strong> Think of these stages as building blocks rather than rigid boundaries. While each stage has its distinct purpose, they work together seamlessly in a well-designed video processing solution. The key is understanding how your specific requirements influence decisions at each stage.</p>
</blockquote>
<h2 id="heading-aws-services-for-building-video-processing-pipelines">AWS Services for Building Video Processing Pipelines</h2>
<p>After understanding the video processing pipeline stages, let's explore how AWS services map to each stage. My experience with Bundesliga taught me that AWS provides various services for different use cases that work seamlessly together. Let me guide you briefly through the services available at each stage.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1737719059426/37752649-60dc-40c9-a512-fec07c020a0e.png" alt="Mapping of AWS services to video processing pipeline stages." class="image--center mx-auto" /></p>
<blockquote>
<p>💡 <strong>Pro Tip:</strong> Don't feel overwhelmed by the number of services. Start with the core services that match your immediate needs and add capabilities as your requirements evolve. Some of these services also provide capabilities that span multiple stages. Keep this in mind to reduce complexity.</p>
</blockquote>
<h3 id="heading-stage-1-ingest"><strong>Stage 1: Ingest</strong></h3>
<p>Getting your content into AWS is the first crucial step. AWS offers several services depending on your specific requirements:</p>
<ul>
<li><p><strong>AWS Elemental Live:</strong> An on-premises encoder that converts high-resolution live streams into network-friendly formats. Perfect for multi-camera events and live studio productions.</p>
</li>
<li><p><strong>AWS Elemental Link:</strong> When you need to ingest raw signals directly from cameras or video equipment. Think of it as your bridge between physical video equipment and the cloud.</p>
</li>
<li><p><strong>AWS Elemental MediaConnect:</strong> For secure and reliable transport of high-quality video streams. Ideal for enterprise-level streaming and B2B live transfers.</p>
</li>
<li><p><strong>AWS Direct Connect:</strong> Creates a dedicated network connection to AWS. Essential when reliability and bandwidth consistency are crucial.</p>
</li>
<li><p><strong>AWS Snowball:</strong> For massive video libraries that need migration to the cloud. Perfect for archive digitization projects.</p>
</li>
<li><p><strong>Amazon S3:</strong> Your scalable storage solution for video files. Used both as initial storage for uploaded content and throughout the pipeline.</p>
</li>
</ul>
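<p>As an illustration of the file-based side of this stage: once content arrives as files, ingest can be as simple as an upload to Amazon S3. The sketch below is my own, not from the original post — the function and bucket names are placeholders, and the client is injected (assuming the boto3-style <code>upload_file</code> method) so the helper stays easy to test:</p>

```python
def ingest_to_s3(local_path, bucket, key, s3_client):
    """Upload a source video file to S3 as the first pipeline stage.

    `s3_client` is expected to offer the boto3-style
    `upload_file(Filename, Bucket, Key)` method; injecting it keeps
    this helper easy to stub in tests.
    """
    s3_client.upload_file(local_path, bucket, key)
    # Return the resulting object URI so later stages can reference it.
    return f"s3://{bucket}/{key}"
```

<p>With boto3 this could be called as, for example, <code>ingest_to_s3("match.mp4", "my-ingest-bucket", "raw/match.mp4", boto3.client("s3"))</code>.</p>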
<h3 id="heading-stage-2-process"><strong>Stage 2: Process</strong></h3>
<p>Processing transforms your content into viewer-ready formats and AWS offers the following options:</p>
<ul>
<li><p><strong>AWS Elemental MediaLive:</strong> Creates viewer-ready streams for different devices and bandwidths. Essential for live streaming scenarios.</p>
</li>
<li><p><strong>AWS Elemental MediaConvert:</strong> Your go-to service for file-based video transcoding. Ideal for video-on-demand content.</p>
</li>
<li><p><strong>AWS Elemental MediaPackage:</strong> Prepares video for delivery while adding enhanced features like time-shifted viewing.</p>
</li>
<li><p><strong>AWS Elemental MediaTailor:</strong> Handles personalized ad insertion. Perfect when monetization is key.</p>
</li>
</ul>
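<p>To make the processing stage more concrete, here is a sketch of the kind of settings payload a simple file-to-HLS transcoding job in AWS Elemental MediaConvert is built from. The bucket paths are placeholders, and a real job additionally needs per-output codec settings and an IAM role — treat this as an illustration of the structure, not a complete job definition:</p>

```python
def build_hls_job_settings(input_uri, output_prefix, segment_length=6):
    """Assemble a minimal MediaConvert-style settings dict:
    one file input, one HLS output group."""
    return {
        "Inputs": [{"FileInput": input_uri}],
        "OutputGroups": [
            {
                "Name": "HLS delivery",
                "OutputGroupSettings": {
                    "Type": "HLS_GROUP_SETTINGS",
                    "HlsGroupSettings": {
                        "Destination": output_prefix,
                        "SegmentLength": segment_length,
                    },
                },
                # A real job lists its renditions here, each with
                # its own video and audio codec settings.
                "Outputs": [],
            }
        ],
    }
```

<p>Such a dict would be passed as <code>Settings</code> to <code>create_job</code> on a MediaConvert client, together with the IAM role the service assumes to read from and write to your S3 buckets.</p>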
<h3 id="heading-stage-3-store"><strong>Stage 3: Store</strong></h3>
<p>Efficient storage is crucial for both live and on-demand content. <strong>Amazon S3</strong> has become the de facto standard for reliable, scalable storage of your media assets. <strong>AWS Elemental MediaPackage</strong> also has the capability to cache content ahead of viewer requests.</p>
<h3 id="heading-stage-4-deliver"><strong>Stage 4: Deliver</strong></h3>
<p>Getting content to viewers efficiently is the final piece:</p>
<ul>
<li><p><strong>AWS Elemental MediaConnect:</strong> For secure content distribution to partners or between regions.</p>
</li>
<li><p><strong>Amazon CloudFront:</strong> Your content delivery network, ensuring smooth playback globally.</p>
</li>
</ul>
<h2 id="heading-the-human-side-of-video-processing">The Human Side of Video Processing</h2>
<p>In my years working in the media and entertainment industry, I encountered an often-overlooked challenge that's crucial for successful implementation. Media and broadcast operators have their established workflows, tools, and interfaces. While AWS Elemental Media Services provide powerful capabilities, operators will rarely interact directly with the AWS Management Console to manage video processing pipelines.</p>
<p>The technical implementation of video processing on AWS is straightforward, but making it usable for media operators requires additional thought. These professionals are experts in their domain, working with specialized broadcast tools and interfaces they've mastered over the years.</p>
<p>The solution lies in thoughtful integration and automation. Rather than expecting operators to adapt to new interfaces, you need to adapt your AWS-based video processing pipelines to existing operational workflows. This might mean:</p>
<ul>
<li><p>Building custom control interfaces or, where possible, integrating with familiar broadcast tools.</p>
</li>
<li><p>Automating routine tasks through APIs and workflows.</p>
</li>
<li><p>Creating intuitive monitoring dashboards that speak the language of media operations.</p>
</li>
</ul>
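<p>As a sketch of the "automating routine tasks through APIs" point: operators should not need the AWS Management Console to bring a channel up or down. The helper below is an assumption of mine, not from the original post — it wraps the boto3-style MediaLive <code>start_channel</code>/<code>stop_channel</code> calls behind one operator-friendly entry point, with the client injected so it can be stubbed:</p>

```python
def set_broadcast_state(channel_id, desired, medialive_client):
    """Start or stop a MediaLive channel on behalf of an operator.

    `medialive_client` is expected to expose the boto3-style
    `start_channel(ChannelId=...)` and `stop_channel(ChannelId=...)`
    methods.
    """
    if desired == "on":
        medialive_client.start_channel(ChannelId=channel_id)
    elif desired == "off":
        medialive_client.stop_channel(ChannelId=channel_id)
    else:
        raise ValueError(f"Unknown desired state: {desired!r}")
    return (channel_id, desired)
```

<p>A function like this would typically sit behind a button in a custom control interface or a chat-ops command, so the operator never touches the underlying AWS API directly.</p>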
<blockquote>
<p>💡 <strong>Pro Tip:</strong> When designing video processing solutions, consider your operators' daily workflows early in the process. The most technically sophisticated pipeline adds little value if it doesn't integrate smoothly with existing operational practices.</p>
</blockquote>
<p>Success still depends on understanding and addressing the human elements of media workflows.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Building video processing solutions on AWS is a journey of understanding both technical capabilities and human workflows.</p>
<p>The key to success lies in asking the right questions upfront, understanding your readiness factors, and carefully considering how your solution will integrate with existing operational practices. Remember, even the most sophisticated technical implementation needs to work harmoniously with your team's established workflows.</p>
<p>Looking ahead, you can expect me to share more detailed insights about specific aspects of video processing on AWS. Future posts will dive deeper into topics like livestreaming architectures, video-on-demand solutions, and operational best practices.</p>
]]></content:encoded></item><item><title><![CDATA[How Amazon Q Developer helped me to save days of work]]></title><description><![CDATA[Introduction
Recently, I had an eye-opening experience with Amazon Q Developer that I'm excited to share. In just 15 minutes, I created a Python script that crawls the entire official Bundesliga DataHub API – a task that would have typically taken on...]]></description><link>https://cremich.cloud/how-amazon-q-developer-helped-me-to-save-days-of-work</link><guid isPermaLink="true">https://cremich.cloud/how-amazon-q-developer-helped-me-to-save-days-of-work</guid><category><![CDATA[Amazon Q]]></category><category><![CDATA[#PromptEngineering]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Mon, 14 Oct 2024 13:18:20 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1728911767629/2a89e932-df36-4aaf-a45a-2b400005b360.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Recently, I had an eye-opening experience with Amazon Q Developer that I'm excited to share. In just 15 minutes, I created a Python script that crawls the entire official Bundesliga DataHub API – a task that would have typically taken one to two full days of work. Our team needed comprehensive sports data to enhance our AI services for content localization. The challenge was to efficiently gather data on seasons, competitions, clubs, players, and official team staff members from the Bundesliga, spanning from 1908 to 2024. This was no small feat, considering the vast amount of data and the complexity of the API structure.</p>
<p>In this blog post, I'll take you through my journey with Amazon Q Developer, from the initial prompt to the final, comprehensive Python script. I'll share the (few) challenges I faced and the key learnings from this experience. Get comfortable for this 15-minute love story between a solution architect and an AI coding assistant.</p>
<h2 id="heading-why-is-a-solution-architect-collecting-sportsdata">Why is a Solution Architect collecting Sports Data?</h2>
<p>One of our teams was tasked with a project requiring comprehensive data from the Bundesliga DataHub API. They needed everything from historical seasons dating back to 1908 up to the most recent 2024 data, including details about competitions, clubs, players, and official team staff members. In total, this amounted to 13,498 single API requests – a daunting task. I wanted to help the team by taking the lead on this.</p>
<p>My traditional approach would have involved using tools like Postman, which would require:</p>
<ol>
<li><p>Calling the APIs</p>
</li>
<li><p>Copy-pasting the responses</p>
</li>
<li><p>Saving the responses as files and sharing them with the team</p>
</li>
</ol>
<p>Conservatively, I estimated this would take 1-2 full days of focused work – a significant chunk of time, especially when juggling multiple projects and deadlines. Instead of falling into old habits, I turned to Amazon Q Developer. Could it help me automate this process? Would it understand the nuances of the Bundesliga sports data API? Could it handle and understand relationships in data? Most importantly, would it truly save me time, or would I end up spending hours correcting its output?</p>
<p>You might already guess how it ended, but please continue reading as I take you through this experience, step by step, and show you how Amazon Q Developer not only met but exceeded my expectations, proving that sometimes, the right tool can indeed feel like magic.</p>
<h2 id="heading-pair-programming-with-an-ai-assistant">Pair Programming with an AI Assistant</h2>
<p>With a clear understanding of the task at hand, the whole love story began with a simple prompt (sensitive information replaced with placeholders in all examples):</p>
<pre><code class="lang-plaintext">Assist me in writing a python script that crawls football sportdata 
from an API. The API provides several endpoints to access XML feeds 
for seasons, competitions, clubs and players.

The script should start with downloading all available seasons by 
making a GET request to this endpoint: {endpoint url}
</code></pre>
<p>The response from Amazon Q was a solid baseline script. Without further instructions, Amazon Q Developer made an educated guess on the potential data structure of the response. I helped Q out by providing an example of a response from our seasons endpoint. This is where the real magic began.</p>
<pre><code class="lang-plaintext">Update the script to parse the following example response from the API:
{example response}
</code></pre>
<p>Amazon Q Developer responded with an updated and precise version capable of handling the expected data structure correctly. I continued to add more instructions to expand the script's capabilities:</p>
<ul>
<li><p>Processing competitions data</p>
</li>
<li><p>Downloading club data for each competition and season</p>
</li>
<li><p>Implementing specific folder structures for data organization</p>
</li>
<li><p>Processing player and coach data</p>
</li>
<li><p>Downloading referee data</p>
</li>
</ul>
<p>The following prompt template was used to download and traverse various data types:</p>
<pre><code class="lang-plaintext">Update the script to also process and download the competitions
data using the competitions endpoint: {endpoint url}.

Here is an example response: 
{example response}
</code></pre>
<p>Without further instructions from my side, Amazon Q Developer made a suggestion on how to organize the downloaded files. As the script evolved, I provided additional instructions for better organization of the output.</p>
<pre><code class="lang-plaintext">Do not include a timestamp in the filename. Just overwrite it.
Save the response in the following folder structure 
./out/&lt;seasonId&gt;/&lt;competitionId&gt;
</code></pre>
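<p>The folder convention from that prompt boils down to a small path-building helper. A sketch — the function name and the <code>out</code> base directory are my assumptions, not the generated script:</p>

```python
from pathlib import Path

def output_path(season_id, competition_id, filename, base="out"):
    """Build ./out/<seasonId>/<competitionId>/<filename> and ensure the
    directory exists, so responses can simply be overwritten on rerun."""
    target_dir = Path(base) / str(season_id) / str(competition_id)
    target_dir.mkdir(parents=True, exist_ok=True)
    return target_dir / filename
```
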
<p>When running the script, I recognized that some combinations of seasons and competitions resulted in an error payload. Instead of collecting all these files with error payloads in my .out folder, I stopped the script for a moment and gave Amazon Q Developer feedback on this runtime behaviour:</p>
<pre><code class="lang-plaintext">Update the script to skip the processing in any case there is 
a Status node in the response. 

Having a status node in the response is indicating an error like 
missing permissions or unavailable data for the given combination.
</code></pre>
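<p>The skip behaviour described in that prompt amounts to checking the parsed response for a <code>Status</code> node before processing it. A minimal sketch of such a guard, based on the prompt rather than the actual generated code:</p>

```python
import xml.etree.ElementTree as ET

def has_error_status(xml_bytes):
    """Return True if the response contains a Status node, which the
    API uses to signal errors such as missing permissions or
    unavailable data for a given season/competition combination."""
    root = ET.fromstring(xml_bytes)
    if root.tag == "Status":
        return True
    # `.//Status` searches all descendants of the root element.
    return root.find(".//Status") is not None
```

<p>The crawler would call this guard right after fetching each feed and skip the save step whenever it returns <code>True</code>.</p>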
<p>As more and more data was crawled, I observed situations that resulted in SSL-related errors. Amazon Q Developer to the rescue: it helped me implement a retry logic with exponential backoff using this simple prompt:</p>
<pre><code class="lang-plaintext">The script is throwing the following error:
requests.exceptions.SSLError: 
HTTPSConnectionPool(host='{}', port=443): Max retries exceeded 
with url: {} (Caused by SSLError(SSLEOFError(8, 
'[SSL: UNEXPECTED_EOF_WHILE_READING] EOF occurred in violation 
of protocol (_ssl.c:1000)')))

How can I mitigate this?
</code></pre>
<p>After implementing a retry logic with exponential backoff to handle SSL-related errors, the script was complete. I had to remove the feed-processing parts of the script for security and compliance reasons, but this is what Amazon Q Developer created:</p>
<pre><code class="lang-python"><span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">fetch_xml_data</span>(<span class="hljs-params">url, max_retries=<span class="hljs-number">3</span>, backoff_factor=<span class="hljs-number">0.3</span></span>):</span>
    session = requests.Session()

    retries = Retry(total=max_retries,
                    backoff_factor=backoff_factor,
                    status_forcelist=[<span class="hljs-number">500</span>, <span class="hljs-number">502</span>, <span class="hljs-number">503</span>, <span class="hljs-number">504</span>])

    session.mount(<span class="hljs-string">'https://'</span>, HTTPAdapter(max_retries=retries))

    <span class="hljs-keyword">for</span> attempt <span class="hljs-keyword">in</span> range(max_retries + <span class="hljs-number">1</span>):
        <span class="hljs-keyword">try</span>:
            response = session.get(url, timeout=<span class="hljs-number">30</span>, verify=<span class="hljs-literal">False</span>)  <span class="hljs-comment"># Increased timeout and disabled SSL verification</span>
            <span class="hljs-keyword">if</span> response.status_code == <span class="hljs-number">200</span>:
                <span class="hljs-keyword">return</span> response.content
            <span class="hljs-keyword">else</span>:
                print(<span class="hljs-string">f"Failed to fetch data from <span class="hljs-subst">{url}</span>. Status code: <span class="hljs-subst">{response.status_code}</span>"</span>)
                <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>
        <span class="hljs-keyword">except</span> requests.exceptions.SSLError <span class="hljs-keyword">as</span> e:
            <span class="hljs-keyword">if</span> attempt &lt; max_retries:
                wait_time = backoff_factor * (<span class="hljs-number">2</span> ** attempt)
                print(<span class="hljs-string">f"SSL Error occurred. Retrying in <span class="hljs-subst">{wait_time:<span class="hljs-number">.2</span>f}</span> seconds..."</span>)
                time.sleep(wait_time)
            <span class="hljs-keyword">else</span>:
                print(<span class="hljs-string">f"Max retries exceeded. Failed to fetch data from <span class="hljs-subst">{url}</span>."</span>)
                print(<span class="hljs-string">f"SSL Error: <span class="hljs-subst">{str(e)}</span>"</span>)
                <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>
        <span class="hljs-keyword">except</span> requests.exceptions.RequestException <span class="hljs-keyword">as</span> e:
            print(<span class="hljs-string">f"An error occurred while fetching data from <span class="hljs-subst">{url}</span>: <span class="hljs-subst">{str(e)}</span>"</span>)
            <span class="hljs-keyword">return</span> <span class="hljs-literal">None</span>

<span class="hljs-comment"># Main function</span>
<span class="hljs-function"><span class="hljs-keyword">def</span> <span class="hljs-title">main</span>():</span>
    <span class="hljs-comment"># Fetch seasons data</span>
    seasons, seasons_root = fetch_and_process_data(SEASONS_ENDPOINT, process_seasons)
    <span class="hljs-keyword">if</span> seasons <span class="hljs-keyword">is</span> <span class="hljs-literal">None</span>:
        print(<span class="hljs-string">"Failed to fetch seasons data. Exiting."</span>)
        <span class="hljs-keyword">return</span>

    <span class="hljs-comment"># Fetch competitions data</span>
    competitions, competitions_root = fetch_and_process_data(COMPETITIONS_ENDPOINT, process_competitions)
    <span class="hljs-keyword">if</span> competitions <span class="hljs-keyword">is</span> <span class="hljs-literal">None</span>:
        print(<span class="hljs-string">"Failed to fetch competitions data. Exiting."</span>)
        <span class="hljs-keyword">return</span>

    <span class="hljs-comment"># Process and save seasons and competitions data</span>
    save_xml_to_file(ET.tostring(seasons_root), <span class="hljs-string">f".out/seasons.xml"</span>)
    save_xml_to_file(ET.tostring(competitions_root), <span class="hljs-string">f".out/competitions.xml"</span>)

    <span class="hljs-comment"># Fetch and save club data for each season and competition</span>
    <span class="hljs-keyword">for</span> season <span class="hljs-keyword">in</span> seasons:
        <span class="hljs-keyword">for</span> competition <span class="hljs-keyword">in</span> competitions:
            club_endpoint = <span class="hljs-string">f"..."</span>
            clubs, clubs_root = fetch_and_process_data(club_endpoint, process_clubs)

            <span class="hljs-keyword">if</span> clubs <span class="hljs-keyword">is</span> <span class="hljs-keyword">not</span> <span class="hljs-literal">None</span>:
                file_path = <span class="hljs-string">f".out/<span class="hljs-subst">{season[<span class="hljs-string">'id'</span>]}</span>/<span class="hljs-subst">{competition[<span class="hljs-string">'id'</span>]}</span>/clubs.xml"</span>
                save_xml_to_file(ET.tostring(clubs_root), file_path)

                print(<span class="hljs-string">f"\nClubs for Season <span class="hljs-subst">{season[<span class="hljs-string">'name'</span>]}</span> and Competition <span class="hljs-subst">{competition[<span class="hljs-string">'name'</span>]}</span>:"</span>)
                <span class="hljs-keyword">for</span> club <span class="hljs-keyword">in</span> clubs:
                    print(<span class="hljs-string">f"  <span class="hljs-subst">{club[<span class="hljs-string">'name'</span>]}</span> (ID: <span class="hljs-subst">{club[<span class="hljs-string">'id'</span>]}</span>)"</span>)
                    print(<span class="hljs-string">f"    Short Name: <span class="hljs-subst">{club[<span class="hljs-string">'short_name'</span>]}</span>"</span>)
                    print(<span class="hljs-string">f"    Type: <span class="hljs-subst">{club[<span class="hljs-string">'type'</span>]}</span>"</span>)

                    <span class="hljs-comment"># Fetch and save player data for this club</span>
                    player_endpoint = <span class="hljs-string">f"..."</span>
                    players, players_root = fetch_and_process_data(player_endpoint, process_players)

                    <span class="hljs-keyword">if</span> players <span class="hljs-keyword">is</span> <span class="hljs-keyword">not</span> <span class="hljs-literal">None</span>:
                        player_file_path = <span class="hljs-string">f".out/<span class="hljs-subst">{season[<span class="hljs-string">'id'</span>]}</span>/<span class="hljs-subst">{competition[<span class="hljs-string">'id'</span>]}</span>/<span class="hljs-subst">{club[<span class="hljs-string">'id'</span>]}</span>/players.xml"</span>
                        save_xml_to_file(ET.tostring(players_root), player_file_path)

                        print(<span class="hljs-string">f"    Players:"</span>)
                        <span class="hljs-keyword">for</span> player <span class="hljs-keyword">in</span> players:
                            print(<span class="hljs-string">f"      <span class="hljs-subst">{player[<span class="hljs-string">'first_name'</span>]}</span> <span class="hljs-subst">{player[<span class="hljs-string">'last_name'</span>]}</span> (ID: <span class="hljs-subst">{player[<span class="hljs-string">'id'</span>]}</span>, Shirt: <span class="hljs-subst">{player[<span class="hljs-string">'shirt_number'</span>]}</span>)"</span>)
                        print(<span class="hljs-string">f"    Total players found: <span class="hljs-subst">{len(players)}</span>"</span>)
                    <span class="hljs-keyword">else</span>:
                        print(<span class="hljs-string">f"    Skipping player data for this club due to error or unavailable data."</span>)

                    <span class="hljs-comment"># Fetch and save team officials data for this club</span>
                    officials_endpoint = <span class="hljs-string">f"..."</span>
                    officials, officials_root = fetch_and_process_data(officials_endpoint, process_officials)

                    <span class="hljs-keyword">if</span> officials <span class="hljs-keyword">is</span> <span class="hljs-keyword">not</span> <span class="hljs-literal">None</span>:
                        officials_file_path = <span class="hljs-string">f".out/<span class="hljs-subst">{season[<span class="hljs-string">'id'</span>]}</span>/<span class="hljs-subst">{competition[<span class="hljs-string">'id'</span>]}</span>/<span class="hljs-subst">{club[<span class="hljs-string">'id'</span>]}</span>/officials.xml"</span>
                        save_xml_to_file(ET.tostring(officials_root), officials_file_path)

                        print(<span class="hljs-string">f"    Team Officials:"</span>)
                        <span class="hljs-keyword">for</span> official <span class="hljs-keyword">in</span> officials:
                            print(<span class="hljs-string">f"      <span class="hljs-subst">{official[<span class="hljs-string">'first_name'</span>]}</span> <span class="hljs-subst">{official[<span class="hljs-string">'last_name'</span>]}</span> (ID: <span class="hljs-subst">{official[<span class="hljs-string">'id'</span>]}</span>, Function: <span class="hljs-subst">{official[<span class="hljs-string">'function'</span>]}</span>)"</span>)
                        print(<span class="hljs-string">f"    Total team officials found: <span class="hljs-subst">{len(officials)}</span>"</span>)
                    <span class="hljs-keyword">else</span>:
                        print(<span class="hljs-string">f"    Skipping team officials data for this club due to error or unavailable data."</span>)

                    print(<span class="hljs-string">"  ---"</span>)
                print(<span class="hljs-string">f"Total clubs found: <span class="hljs-subst">{len(clubs)}</span>"</span>)
            <span class="hljs-keyword">else</span>:
                print(<span class="hljs-string">f"Skipping club data for Season <span class="hljs-subst">{season[<span class="hljs-string">'name'</span>]}</span> and Competition <span class="hljs-subst">{competition[<span class="hljs-string">'name'</span>]}</span> due to error or unavailable data."</span>)

<span class="hljs-keyword">if</span> __name__ == <span class="hljs-string">"__main__"</span>:
    main()
</code></pre>
<p>I had a timer running in parallel: in just 15 minutes, Amazon Q Developer and I had created a robust Python script capable of requesting, processing, and downloading all relevant data feeds.</p>
<p>The script efficiently handled 13,498 single API requests, saved the responses in a structured format, and implemented error handling for various scenarios.</p>
<h2 id="heading-lessons-learned">Lessons Learned</h2>
<h3 id="heading-the-power-of-clear-communication"><strong>The power of clear communication</strong></h3>
<p>What struck me most was the accuracy and completeness of the code generated by Amazon Q Developer. I didn't need to make any manual modifications to the script. It was purely created through our back-and-forth conversation, with Amazon Q understanding the context and requirements perfectly. The quality of the output was directly related to the clarity and specificity of my prompts.</p>
<p>Clear, specific prompts play a crucial role in getting the desired output from Amazon Q Developer. When I provided detailed context and examples of API responses, the quality and accuracy of the generated code improved dramatically. Effective communication skills, combined with domain expertise, make the difference when working with such AI tools.</p>
<p>For instance, when I shared an example of the API response structure, Amazon Q Developer was able to adapt the script precisely to handle the correct data structure. This iterative process of providing examples and getting refined code became the cornerstone of our collaboration.</p>
<pre><code class="lang-plaintext">Update the script to also process and download the competitions
data using the competitions endpoint: {endpoint url}.

Here is an example response: 
{example response}
</code></pre>
<p>Amazon Q Developer just needed one or two examples of the data structures the API returns. After that, it was able to correctly guess most of the subsequent data structures for players, clubs, and referees that were added as features afterwards.</p>
<h3 id="heading-it-is-an-iterative-development-not-a-single-prompt-shop"><strong>It is iterative development, not a single-prompt shop</strong></h3>
<p>The success of this project was largely due to the iterative approach I took with Amazon Q Developer. Instead of expecting a perfect script on the first try, I built the functionality piece by piece, adding features incrementally. This approach allowed for better control and understanding of the process, and it helped in identifying and addressing potential issues early on. Each iteration brought us closer to the final product, with prompts like:</p>
<pre><code class="lang-plaintext">Update the script to process and download club data for every 
competition of every season using the club endpoint: {endpoint url}
</code></pre>
<p>Breaking down prompts into smaller, more manageable parts not only resulted in a more robust script but also allowed me to learn and adapt my interaction style with the AI assistant.</p>
<h3 id="heading-balancing-ai-assistance-with-human-oversight"><strong>Balancing AI assistance with human oversight</strong></h3>
<p>While Amazon Q Developer proved to be an incredibly powerful tool, this experience reinforced the importance of maintaining human oversight in the coding process. The AI was excellent at generating code based on my prompts, but it was my responsibility to ensure that the generated code aligned with our overall objectives and met our specific requirements. For example, when the AI made assumptions about error handling, I was able to provide more specific instructions.</p>
<pre><code class="lang-plaintext">Update the script to skip the processing in any case there is 
a Status node in the response. Independent of the status code. 
Having a status node in the response is indicating an error 
like missing permissions or unavailable data for the given combination.
</code></pre>
<h3 id="heading-a-matter-of-mindsets">A matter of mindsets</h3>
<p>My role shifted from writing code to problem-solving and providing context, allowing for a more strategic approach to development. Although Amazon Q Developer did the hard work of typing, my instructions and how I framed the problem made the difference. It's not about replacing human expertise but augmenting it, allowing us to focus on higher-level problem-solving and creativity. The code generated by Amazon Q Developer was a testament to the potential of AI's understanding of the context and requirements if we as engineers are able to shift mindsets.</p>
<p>As we continue to integrate AI tools into our development processes, it's crucial to cultivate this new mindset that balances technical expertise with the ability to effectively leverage AI capabilities.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In just 15 minutes, Amazon Q Developer proved its value as a powerful assistant, helping me automate a task that would have otherwise taken days. Through clear communication, an iterative approach, and a balance of AI assistance with human oversight, I was able to create a robust Python script to efficiently crawl the Bundesliga DataHub API.</p>
<p>There is a critical shift in how we approach software development. It's no longer just about writing code; it's about how effectively we can leverage AI to augment our capabilities. Amazon Q Developer didn't replace my expertise—it complemented it, enabling me to focus on problem-solving and strategy while the AI handled the heavy lifting.</p>
<p>As AI tools become more integral to our workflows, embracing this shift will not only boost productivity but also free up time for creativity and innovation (or for drinking more coffee with my fellow engineers). The key is to see AI as a partner, working alongside us to push the boundaries of what we can achieve. And when used effectively, it truly can feel like magic.</p>
]]></content:encoded></item><item><title><![CDATA[Don't let Technical Debt sabotage your product roadmap - Part 2]]></title><description><![CDATA[Introduction
In the first part of this blog series, we explored the hidden costs of technical debt, revealing its impact on development teams, businesses, and customers. We saw how low code quality leads to slower development, increased defects, and ...]]></description><link>https://cremich.cloud/step-by-step-business-case-refactoring</link><guid isPermaLink="true">https://cremich.cloud/step-by-step-business-case-refactoring</guid><category><![CDATA[refactoring]]></category><category><![CDATA[architecture]]></category><category><![CDATA[modernization]]></category><category><![CDATA[leadership]]></category><category><![CDATA[technical-debt]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Fri, 09 Aug 2024 10:19:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1723198723318/b31c4f5b-7f38-44f9-b749-7bfc114f9a9a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>In the <a target="_blank" href="https://cremich.cloud/dont-let-technical-debt-sabotage-your-product-roadmap">first part</a> of this <a target="_blank" href="https://cremich.cloud/series/technical-debt">blog series</a>, we explored the hidden costs of technical debt, revealing its impact on development teams, businesses, and customers. We saw how low code quality leads to slower development, increased defects, and unpredictable delivery timelines, ultimately hindering innovation and jeopardizing product roadmaps. We also discussed the ripple effects of technical debt, causing customer dissatisfaction, missed business opportunities, and decreased employee morale. The alarming statistics from research by CodeScene and Stripe further underscored the urgency of addressing technical debt, with developer inefficiency leading to a staggering $300 billion annual loss in global GDP and unplanned work consuming 23-42% of developer time.</p>
<p>In this second part, we shift our focus from identifying the problem to solving it. We will explore approaches to build a compelling business case for refactoring, empowering you to convince your management that investing in code quality is not just a technical necessity but a strategic business imperative. We'll cover key components of such a business case, including quantifying the impact of technical debt, translating technical jargon into the language of business, and addressing common objections to refactoring.</p>
<p>Let's resist the urge to immediately open our IDE and engage in random refactoring, only to succumb to pressure and return to feature development after a few hours. Instead, let's take a different approach to secure management buy-in this time.</p>
<h2 id="heading-step-by-step-towards-a-compelling-business-case-to-get-rid-of-your-technical-debt">Step-by-step towards a compelling business case to get rid of your technical debt</h2>
<p>The goal isn't just to refactor for cleaner code; it's about strategically aligning your technical efforts with your business objectives. In an ideal world, the decision to increase technical debt would be consciously aligned with a clear plan on how and when to pay it back. We see similarities in financial markets: you don't get a loan (financial debt) for real estate without negotiating the conditions for repayment. In practice, we're not quite there yet in software engineering and product development.</p>
<p>Unlike the fixed terms of financial obligations when buying real estate or an expensive car, software development and business models are in constant flux. Hence, it's crucial to take the extra steps and build a compelling business case that resonates with your management and paves the way for a healthier, more productive codebase, given your product's and business's current situation.</p>
<h3 id="heading-step-1-understand-your-current-business-goals">Step 1: Understand your current business goals</h3>
<p>Why can't we just start refactoring? Shouldn't this be self-evident within a modern IT company? Perhaps it should be, but every company needs to decide how and where to invest its time and your precious skills as an engineer to achieve business goals. Here, you face a constant conflict: balancing the short-term ROI of implementing new features against the long-term ROI of modernizing or cleaning up your code.</p>
<p>I once described it to a team this way, asking one of the engineers:</p>
<blockquote>
<p>Imagine you are our CEO and you have $100 to spend to make our business more successful. How would you decide?</p>
</blockquote>
<p>The answer was, and now we are getting somewhere:</p>
<blockquote>
<p>I would decide based on value for my company.</p>
</blockquote>
<p>What constitutes value for your company? Where does your company want to go in the next two or three years? What are the key business goals that your software is supposed to support? Is it about increasing revenue, improving customer satisfaction, or entering new markets?</p>
<p>It's crucial to understand the bigger picture – your company's strategic objectives – to create a solid argument for how to invest your $100. The scope of reducing technical debt heavily depends on what your business wants to achieve today and in the future. Keep in mind that markets, customers, and customer problems change constantly – so the foundation and argumentation for reducing technical debt (as with building features) must be evaluated continuously as well.</p>
<h3 id="heading-step-2-quantify-the-current-impact-of-technical-debt">Step 2: Quantify the current impact of technical debt</h3>
<p>To build a compelling business case, you need to quantify the current impact of technical debt on your organization.</p>
<p>An anti-pattern in quantifying the current impact is abstracting too much information. If you simply argue, "Our app is too slow, we have to make it faster," your business case lacks a solid foundation. Put yourself again in the shoes of your CEO. Would this information be enough to justify investing your current budget of $100? You could do this, of course, but the risk is high that you'll never see a return on that investment.</p>
<p>Your initiative to reduce technical debt will always compete with product enhancements. To increase your chances of winning this competition, I encourage you to shift gears and bring your argumentation to the same level as business stakeholders (ideally) use to convince management to invest in product enhancements. Gathering data and using it to demonstrate the tangible costs of inaction will help you put your refactoring on the product roadmap. The kind of data you need to collect and analyze highly depends on what you want to achieve or the current areas of improvement you see. Here are some examples:</p>
<ul>
<li><p><strong>Measuring Unplanned Work:</strong> Start by tracking the amount of time your team spends on unplanned work, such as bug fixes, hotfixes, and firefighting. Tools like Jira can help you gather this data. The "Accelerate" research suggests that a baseline of 15% unplanned work is a good target for high-performing organizations. If your team is spending significantly more time on unplanned work, it's a clear sign that technical debt is taking a toll.</p>
</li>
<li><p><strong>Calculating the Cost of Delay:</strong> Estimate the financial impact of delayed features or missed opportunities due to technical debt. This could involve calculating the potential revenue loss from a delayed product launch or the cost of losing customers to competitors due to slow feature development.</p>
</li>
<li><p><strong>Calculating the Cost of Failure:</strong> Issues with outages, scalability, or efficiency come with a cost. What is the potential revenue loss when your product discovery page is not responding or a response takes more than 3 seconds? What is your loss of engagement if teaser images can't be rendered in time?</p>
</li>
<li><p><strong>Assessing the Opportunity Cost:</strong> Consider the potential value of new features or products that your team could be developing if they weren't bogged down in managing technical debt. This can be a powerful way to illustrate the missed opportunities caused by technical debt.</p>
</li>
</ul>
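<p>To make the first two of these measurements concrete, here is a minimal back-of-the-envelope sketch. All inputs (hours, revenue figures) are illustrative assumptions, not data from this article – replace them with numbers from your own issue tracker and finance team:</p>

```python
# Hypothetical sketch of two metrics discussed above; all numbers are
# illustrative assumptions, not real data.

def unplanned_work_ratio(unplanned_hours: float, total_hours: float) -> float:
    """Share of team capacity lost to bug fixes, hotfixes, and firefighting."""
    return unplanned_hours / total_hours

def cost_of_delay(weekly_revenue_at_stake: float, weeks_delayed: float) -> float:
    """Revenue missed while a feature launch slips."""
    return weekly_revenue_at_stake * weeks_delayed

# Example: a 6-person team, 40h each per week, 60 of 240 hours unplanned.
ratio = unplanned_work_ratio(unplanned_hours=60.0, total_hours=240.0)
print(f"Unplanned work: {ratio:.0%} (Accelerate baseline target: 15%)")  # 25%

# Example: a launch slips 4 weeks on a feature worth $25,000 per week.
print(f"Cost of delay: ${cost_of_delay(25_000, 4):,.0f}")  # $100,000
```

<p>Even a rough calculation like this moves the conversation from "the code is messy" to "we are losing a quantifiable amount of capacity and revenue", which is the level your business stakeholders argue at.</p>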
<p>This is your chance to anticipate and counter common arguments against investing in refactoring, such as "We can't afford it" or "It's not a priority." Use data and real-world examples to demonstrate the long-term benefits of addressing technical debt. Highlight the potential cost of inaction, including lost revenue, decreased market share, and employee attrition.</p>
<h3 id="heading-step-3-pitch-your-initiative-using-the-language-of-your-business">Step 3: Pitch your initiative using the language of your Business</h3>
<p>Once you have a clear understanding of the business goals, you can start to align your refactoring efforts with them. Instead of arguing that your current solution does not follow SOLID or clean code principles, is poorly modularized, or is tightly coupled, show how reducing technical debt can directly contribute to achieving business goals. Some examples:</p>
<ul>
<li><p><strong>Faster time-to-market:</strong> By streamlining our development process and reducing the time spent on bug fixes and unplanned work, we can deliver new features and products to market faster, gaining a competitive edge. We expect this to bring us 100,000 more users per month, worth $20 million per year in additional revenue.</p>
</li>
<li><p><strong>Improved customer satisfaction:</strong> By eliminating bugs and performance issues, we enhance the user experience and increase customer loyalty. This could improve retention by 5% and reduce customer churn by 25%.</p>
</li>
<li><p><strong>Increased innovation:</strong> By freeing up our developers from the burden of maintaining legacy code, we can empower them to explore new technologies and develop innovative solutions that drive business growth. This will remove current constraints and bottlenecks, making product features possible that were previously impossible to implement.</p>
</li>
</ul>
<p>As you know your business and your company best, you can be as concrete here as possible. Try to inspire your management by showing them the potential for the business that lies beneath the surface of your refactoring efforts. When presenting your case for refactoring to management, it's essential to speak the language of business. Avoid technical jargon and focus on the business implications of technical debt. Highlight the potential cost savings, revenue opportunities, and risk mitigation that refactoring can bring.</p>
<p>Trust me, this is how you get your management to listen and increase your chances of strong buy-in.</p>
<h3 id="heading-step-4-prioritize-and-measure-progress-and-results">Step 4: Prioritize and measure progress and results</h3>
<p>Congratulations! You've achieved buy-in from your management, and your technical refactoring is part of the roadmap for the upcoming two quarters. Now it's up to you to deliver measurable results. Measure and monitor the impact of your effort on unplanned work, customer satisfaction, or time-to-market of new features.</p>
<p>Depending on your argumentation and current situation, you might have several areas of improvement in mind. Prioritize the most critical areas of technical debt based on their impact on business goals and consider refactoring as an incremental and iterative process. It's unlikely that your initiatives to reduce technical debt will be the only item on your roadmap. Find a way to balance ongoing feature development while doing your refactoring.</p>
<p>I've seen teams implementing a mechanism I call "Technical Debt Rotation." On regular iterations, one engineer of the team will work on reducing technical debt while the others continue with feature development. After two iterations, they change roles within the team so that knowledge is shared. Other approaches focus on allocating defined time for all engineers to pay back technical debt, with <a target="_blank" href="https://blog.alexewerlof.com/p/tech-debt-day">great results</a>.</p>
<p>I also like the approach of the <a target="_blank" href="https://verraes.net/2020/01/wall-of-technical-debt/">Wall of Technical Debt</a> to make progress and increase transparency on your actions.</p>
<p>Whatever approach works for you: discuss, measure, and adapt. This will help you find solid argumentation for your next roadmap cycle when you have to update or revisit your current refactoring business case.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>I hope this <a target="_blank" href="https://cremich.cloud/series/technical-debt">two-part series</a> has shed light on the often-underestimated impact of technical debt. It's not just about messy code; it's about missed opportunities, frustrated developers, and a real threat to your business's bottom line: a potential $300 billion annual loss in global GDP due to developer inefficiency, and organizations wasting 23-42% of their development time on technical debt. These aren't just numbers on a page; they represent real-world consequences that can hinder your company's growth and success.</p>
<p>As technical leaders, we have a responsibility to make code quality a business concern. It's time to move beyond vague statements like "We need to refactor" and start building data-driven arguments that resonate with management. By quantifying the impact of technical debt and showcasing the potential ROI of refactoring, we can shift the conversation from a purely technical one to a strategic business discussion.</p>
<p>In this <a target="_blank" href="https://cremich.cloud/series/technical-debt">series</a>, we explained the hidden costs of technical debt and its ripple effects on customers and the business, and touched upon the potential impact of AI coding assistants. We outlined a step-by-step guide to crafting a compelling business case for refactoring, empowering you to advocate for the changes your team needs to thrive.</p>
<p>Every modern business is a data business. Tackling technical debt is not just a matter of good coding practices; it's a data-driven decision that can unlock significant value for your organization. Take the insights from this <a target="_blank" href="https://cremich.cloud/series/technical-debt">series</a>, gather your data, and start building your case for refactoring today. Your team, your customers, and your business will thank you for it.</p>
]]></content:encoded></item><item><title><![CDATA[Don't let Technical Debt sabotage your product roadmap - Part 1]]></title><description><![CDATA[Introduction
Picture this: Your development team is humming along, shipping features at a breakneck pace. But beneath the surface, a looming threat is growing – technical debt, an invisible force that can derail your product roadmap and sabotage your...]]></description><link>https://cremich.cloud/dont-let-technical-debt-sabotage-your-product-roadmap</link><guid isPermaLink="true">https://cremich.cloud/dont-let-technical-debt-sabotage-your-product-roadmap</guid><category><![CDATA[technical-debt]]></category><category><![CDATA[refactoring]]></category><category><![CDATA[architecture]]></category><category><![CDATA[modernization]]></category><category><![CDATA[leadership]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Tue, 25 Jun 2024 11:59:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1719316743968/dff5d5c4-7bb2-4c00-a595-faf836a279d9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Picture this: Your development team is humming along, shipping features at a breakneck pace. But beneath the surface, a looming threat is growing – technical debt, an invisible force that can derail your product roadmap and sabotage your business model.</p>
<p>Technical debt, in simple terms, is the cost of taking shortcuts in software development, a cost that keeps growing until the day comes to pay it back.</p>
<p>Imagine you're building a house. You could save time and money by cutting corners on the foundation. But eventually, those shortcuts will catch up with you. The house might start to creak and groan, the roof might leak, and the walls might start to crumble. The same thing happens with software. As technical debt accumulates, your development speed slows down, defects start to pile up, and your product roadmap becomes increasingly unpredictable.</p>
<p>What is the difference between the story of your house and the story of your software? While you can see the results of the shortcuts in your house, shortcuts in your software are invisible most of the time. Keeping technical debt invisible can be a costly mistake.</p>
<p>By technical debt, I do not mean the urge that we as engineers feel to migrate to the latest cutting-edge frameworks or programming languages. By technical debt, I mean the loss of opportunities for growing your business.</p>
<p>In this <a target="_blank" href="https://cremich.cloud/series/technical-debt">blog series</a>, I'll cover how to build a compelling business case for refactoring to convince your management that investing in code quality is not just a technical concern but a strategic business imperative. Let us start by looking at the different perspectives on technical debt and the potential impact of AI Coding Assistants.</p>
<h2 id="heading-the-two-perspectives-on-technical-debt">The two perspectives on Technical Debt</h2>
<p>There are a lot of anecdotes, research, and studies out there. In this chapter, I will share insights from two great papers by CodeScene [<a target="_blank" href="https://codescene.com/hubfs/web_docs/Business-impact-of-code-quality.pdf">1</a>] [<a target="_blank" href="https://codescene.com/hubfs/calculate-business-costs-of-technical-debt.pdf">2</a>], mixed with my personal experiences and thoughts, as well as data from a Stripe study on developer efficiency [<a target="_blank" href="https://stripe.com/files/reports/the-developer-coefficient.pdf">3</a>]. Stripe shared some impressive numbers: developer inefficiency leads to a staggering $300 billion annual loss in global GDP. 🤯</p>
<p>Let me be clear on this: Technical debt is a business problem. However, the lack of visibility into the impact of technical debt on your business means that the short-term return on investment of implementing new features is often traded for the long-term business growth opportunities of reducing technical debt. Only 10% of business managers actively manage technical debt. In my opinion, there are two approaches for tackling technical debt. The reactive approach is to create a business case for your refactorings. The proactive approach is to leverage the support of AI to deal with your refactoring. Both have a certain degree of uncertainty in common.</p>
<p>In any case, our first important milestone as technical leaders is creating awareness about the impact of technical debt.</p>
<h3 id="heading-a-look-inside-your-development-teams">A look inside your development teams</h3>
<p>Technical debt makes it increasingly difficult to add new features or modify existing ones. CodeScene revealed that development in low-quality code is a staggering <strong>124% slower</strong> than in high-quality code. Imagine the frustration of your team when a feature that should take a week to implement ends up taking more than two! This slowdown isn't just an inconvenience; it's a major drain on productivity and lets your product backlog pile up.</p>
<p>Defects due to low code quality and high technical debt translate into unplanned work – those dreaded bug fixes, hotfixes, and patches that eat away at your team's time and resources. The <a target="_blank" href="https://www.software.com/reports/future-of-work">“Future of Work”</a> report published by <a target="_blank" href="http://software.com">software.com</a> found that developers spend just 60 minutes per weekday actually coding. The Stripe study found that developers spend an average of 17 hours per week dealing with maintenance issues and another 4 hours dealing with "bad code". According to Stripe, unplanned work represents a massive opportunity cost, estimated at $85 billion annually in lost developer productivity alone.</p>
<p>Unplanned work in this context means that your team's effort is spent firefighting instead of building new features or improving your product. This will sabotage your product roadmap by making it nearly impossible to craft your upcoming releases. This unpredictability is a nightmare for planning and can severely damage your credibility with stakeholders.</p>
<p>But why can't we tackle the issue of technical debt by simply hiring new developers? That can in fact make the situation even worse, as it increases coordination costs and can make software development less efficient – especially in businesses that have already accumulated a lot of technical debt.</p>
<p><a target="_blank" href="https://productdeveloper.net/cross-team-communication-avoid-dependencies/"><img src="https://productdeveloper.net/static/8da6e496cb29b90a66cf885dd592c92c/2a8be/lines-of-communication-and-team-size.jpg" alt="Cross-Team communication and the impact of dependencies" class="image--center mx-auto" /></a></p>
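<p>The coordination cost illustrated above has a simple formula behind it: in a team of <em>n</em> people, the number of pairwise communication lines is <em>n(n-1)/2</em>, so it grows quadratically with team size. A tiny sketch makes this tangible:</p>

```python
# Pairwise communication lines in a team of n people: n * (n - 1) / 2.
# This is why adding developers to a codebase riddled with technical debt
# can slow delivery down further instead of speeding it up.

def communication_lines(team_size: int) -> int:
    """Number of distinct pairs that may need to coordinate."""
    return team_size * (team_size - 1) // 2

for n in (5, 10, 15):
    print(f"{n} people -> {communication_lines(n)} communication lines")
# 5 -> 10, 10 -> 45, 15 -> 105: tripling the team size roughly
# increases the coordination overhead tenfold.
```

<p>Every one of those lines is a potential meeting, handover, or misunderstanding, and technical debt makes each of them more expensive.</p>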
<h3 id="heading-the-ripple-effect">The Ripple Effect</h3>
<p>Technical debt extends far beyond the confines of your codebase, impacting your team's morale, your customers, and your business. I call this the ripple effect of technical debt. At least since the publication of "Accelerate: Building and Scaling High Performing Technology Organizations," we have understood that there is a science behind software engineering and DevOps. There is not only empirical but also scientific evidence of the impact of software delivery performance on organizational performance.</p>
<p>Depending on your product, your users have various expectations of the software described by all the functional and non-functional requirements, ultimately manifested in architectural characteristics. When technical debt rears its ugly head in the form of glitches, crashes, and performance issues, it impacts how users experience your products. This can lead to negative reviews, customer churn, and ultimately, lost revenue.</p>
<p>Picture this when you need to convince your management about the impact of increasing bugs: behind every bug, there is an unhappy customer potentially turning away from your product, with a negative review or app store rating at their fingertips. Say goodbye to all the new customers waiting for your features to be shipped. Goodbye, business value and competitive advantage. When you consistently fail to deliver on promises, it erodes trust and can jeopardize your product roadmap.</p>
<p>Technical debt can harm innovation and hinder your ability to capitalize on new opportunities. This can put you at a competitive disadvantage and prevent you from staying ahead of the curve.</p>
<p>Let's finally not forget the human cost of technical debt. Working with messy, convoluted code can be incredibly frustrating for developers. Developers feel their productivity is hindered by factors like legacy systems, unclear prioritization, and insufficient time to fix bad code.</p>
<p>When talented engineers leave your team due to technical debt-induced frustration, you lose valuable knowledge and experience, further exacerbating the problem. According to Stripe, executives consider the lack of developer talent a bigger threat to their business than access to capital. That alone should be enough to argue that technical debt is not just a technical issue but a critical business risk. But often, this is not the case.</p>
<h2 id="heading-the-impact-of-ai-coding-assistants-on-technical-debt">The Impact of AI Coding Assistants on Technical Debt</h2>
<p>Can AI not only help us write new code, but also help us improve existing code and raise code quality? While tools like Amazon Q Developer or GitHub Copilot promise to boost productivity and streamline coding tasks, their impact on technical debt is a complex and evolving issue.</p>
<p>AI coding assistants have the potential to help engineers pay back technical debt. By automating repetitive tasks, suggesting improvements, and even generating entire code snippets, these tools could free up developers to focus on higher-level design and problem-solving. However, the reality is more nuanced.</p>
<p>I tried to better understand at which stage we are right now, and I found an interesting whitepaper from GitClear named "Coding on Copilot". They found that in 30% of cases, an AI assistant failed to improve code health, and in two-thirds of the cases, it broke existing tests. AI coding assistants tend to hallucinate and write even more code than engineers do by hand. Codebases that are partially written with AI grow faster than our codebases used to grow.</p>
<p>And we all know: more code does not mean better code. More code introduces a higher risk of increasing technical debt, which brings us back to my definition of technical debt as a shortcut in software development. To be fair, AI coding assistants are not the root cause. Over the years, we learned to treat copy-pasting code – whether from StackOverflow or from an AI coding assistant – as a shortcut in software development. You might have an idea of how this ends unless you use those tools responsibly.</p>
<p>As we move forward, it's crucial to explore how AI can best support refactoring efforts. This includes developing AI models that can better understand code structure and domain context, as well as integrating AI assistants into existing development workflows in a way that promotes code quality and maintainability.</p>
<p>There is an interesting podcast published by ThoughtWorks called <a target="_blank" href="https://www.thoughtworks.com/en-de/insights/podcasts/technology-podcasts/refactoring-with-ai">"Refactoring with AI"</a>. Martin Fowler asked an interesting question:</p>
<blockquote>
<p>Is there any work at trying to do this kind of stuff with these kinds of models that operate on the level of the abstract syntax tree, rather than the code text itself?</p>
</blockquote>
<p>Adam Tornhill, CTO and Founder of CodeScene, explained that they tried such approaches using machine learning models that work on the abstract syntax tree instead of treating code as text. I see the potential to mitigate hallucinations, as this approach erases the differences between programming languages and offers coding assistants a new way of dealing with code.</p>
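<p>To make the AST idea concrete, here is a minimal sketch using Python's built-in <code>ast</code> module (this is only an illustration of the representation, not how CodeScene's models work). A tool operating on the tree sees functions, loops, and assignments as structured nodes, independent of formatting or variable names:</p>

```python
# Minimal illustration of code-as-tree vs. code-as-text, using Python's
# built-in ast module. The example source is purely hypothetical.
import ast

source = """
def total(prices):
    result = 0
    for p in prices:
        result = result + p
    return result
"""

tree = ast.parse(source)

# The text view sees characters; the AST view sees structure:
functions = [n.name for n in ast.walk(tree) if isinstance(n, ast.FunctionDef)]
loops = [n for n in ast.walk(tree) if isinstance(n, ast.For)]
print(functions)   # ['total']
print(len(loops))  # 1
```

<p>A refactoring tool working on this representation could, for example, recognize the accumulate-in-a-loop pattern and suggest <code>sum(prices)</code> regardless of how the code is formatted, which is much harder to do reliably when the model only predicts text.</p>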
<p>Another trend that I see is that behind every coding assistant, there is not just one model that executes all tasks. While diving deeper into Amazon Q Developer, I learned that multiple models are evaluated when you send your prompt. Based on this evaluation, Amazon Q Developer picks the one best suited to execute your task.</p>
<p>Ultimately, the successful use of AI coding assistants in managing technical debt will depend on the expertise and judgment of human developers. These tools can be powerful allies, but they should not be seen as a replacement for sound software engineering practices.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>I hope this part helps you to find enough arguments and anecdotes to emphasize that technical debt is a business problem.</p>
<p>Research shows that developers waste 23-42% of their time due to technical debt. Why is this still tolerated in so many cases? A lack of visibility and a missing connection to business value are two reasons I have observed over my years in this industry. Discussions around tackling technical debt are often reduced to statements like "We have to refactor this" without any argumentation based on data.</p>
<p>Technical leaders need to understand that they have the power to make code quality a business concern. As every modern business is a data business, tackling technical debt is a data-driven decision, just like every other business decision.</p>
<p>In the <a target="_blank" href="https://cremich.cloud/step-by-step-business-case-refactoring">next part</a>, we will explore how to create a compelling business case for your refactoring so that your management can make an informed decision on how to trade long-term business growth back from short-term ROI.</p>
]]></content:encoded></item><item><title><![CDATA[My "Aha!" Moment with Amazon Q]]></title><description><![CDATA[Introduction
Have you ever used Amazon Q and gotten different results based on whether you were asking from the AWS console or within your IDE? That's not a glitch in the matrix. That is an intention. That is how applications behave that are baked by...]]></description><link>https://cremich.cloud/my-aha-moment-with-amazon-q</link><guid isPermaLink="true">https://cremich.cloud/my-aha-moment-with-amazon-q</guid><category><![CDATA[AWS]]></category><category><![CDATA[Amazon Q]]></category><category><![CDATA[generative ai]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Wed, 20 Mar 2024 09:06:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1710417486250/3a57d5f3-8a52-429a-a691-27558fb7fd16.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Have you ever used Amazon Q and gotten different results based on whether you were asking from the AWS console or within your IDE? That's not a glitch in the matrix. That is intentional. That is how applications backed by large language models (LLMs) behave. Amazon Q is more than just a single tool; it's like multiple Personas with distinct personalities. Understanding this is key to unlocking the full power of this AI-driven assistant.</p>
<p>Let's be clear, I'm not talking about differences in the fancy visual interface. I'm talking about how Amazon Q responds, the kind of code it generates, and even how it troubleshoots problems. Why does this matter? Getting the most out of Amazon Q means knowing which Persona you're talking to.</p>
<p>In this blog post, I'll break down the different Personas of Amazon Q and how those personalities change depending on where you interact with the service. You'll learn why grasping this concept is key, making Amazon Q a powerful tool in your cloud development toolbox.</p>
<h2 id="heading-amazon-qs-flavors"><strong>Amazon Q's "Flavors"</strong></h2>
<p>Think of each integration point as setting the stage for one of Amazon Q's Personas to shine. Each has a specialty and a way of interacting that you'll need to recognize for the best results. Here's a look at what you're likely to encounter:</p>
<ul>
<li><p><strong>The Guide (Management Console):</strong> When you find Amazon Q within the <a target="_blank" href="https://docs.aws.amazon.com/amazonq/latest/aws-builder-use-ug/q-on-aws.html">AWS management console</a>, expect this Persona to be high-level and focused on guidance. Need help understanding error messages? Want a walkthrough of setting up a new service? This Persona will provide links and break complex tasks down into manageable steps.</p>
</li>
<li><p><strong>The Coding Wizard (IDE):</strong> Inside your <a target="_blank" href="https://docs.aws.amazon.com/amazonq/latest/aws-builder-use-ug/q-in-IDE.html">Integrated Development Environment</a>, Amazon Q gets down to business. Ask for a code snippet, and it might just generate one. Need to refactor a messy bit of logic? This Persona can suggest cleaner alternatives directly within your code editor.</p>
</li>
<li><p><strong>The Service Specialist (AWS Service Integrations):</strong> When interacting with Amazon Q <a target="_blank" href="https://docs.aws.amazon.com/amazonq/latest/aws-builder-use-ug/q-and-aws-services.html">directly within AWS services</a> (like Glue, Quicksight, and others), it gains specialized knowledge. Expect answers tailored to the service in question, troubleshooting tips, and deep insights into how that service functions.</p>
</li>
</ul>
<p><strong>My Pro Tip:</strong> It's tempting to think you can ask any question of Amazon Q across all touchpoints and get the perfect solution. That's rarely the case. The Persona you're talking to shapes the answer.</p>
<p><strong>Personal Experience Snippet</strong></p>
<p>Early on, I asked Amazon Q from both the management console and my IDE:</p>
<blockquote>
<p>Please create a cloudformation template to host a website in my AWS account.</p>
</blockquote>
<p>What you get in the AWS Management Console is guidance on the steps required to configure the infrastructure to host your website on AWS. In your IDE, you get an AWS CloudFormation template to host your website on AWS that you can modify and iterate on. This was my lightbulb moment – Amazon Q isn't just context-aware; it's like having three different assistants ready to tackle different parts of cloud development.</p>
<p>I expect these capabilities to diverge further in the future, because each integration has its own unique selling points. Amazon Q running in the AWS Management Console might know what runs in your AWS account. The IDE integration knows what you build and plan to run in your AWS account. Different context. Different behaviors. Different customer roles.</p>
<h2 id="heading-so-what-lets-connect-some-dots"><strong>So What? Let’s connect some dots.</strong></h2>
<p>Amazon Q has different personalities – that's neat, but why should you care as a developer, as a cloud engineer, as a solution architect? Here is why!</p>
<h3 id="heading-the-heart-of-amazon-q-is-a-llm">The heart of Amazon Q is an LLM</h3>
<p>Amazon Q is powered by Amazon Bedrock and the LLMs it provides. Which LLMs the service uses is not important. But with those personas in mind, and knowing how the LLM game is played, you might recognize a concept called "<a target="_blank" href="https://community.aws/content/2dJmYpKlFNh6NOeC71GIZWZkfST">System Prompt</a>".</p>
<blockquote>
<p>A system prompt is a way to provide context, instructions, and guidelines to Claude before presenting it with a question or task. By using a system prompt, you can set the stage for the conversation, specifying [...] role, personality, tone, or any other relevant information that will help it better understand and respond to the user's input (<a target="_blank" href="https://docs.anthropic.com/claude/docs/system-prompts">https://docs.anthropic.com/claude/docs/system-prompts</a>)</p>
</blockquote>
<p>Remember my words from before: "The Persona you're talking to shapes the answer." System prompts are one way to achieve this.</p>
<p>Remember, LLMs aren't truly intelligent. They excel at predicting the next word based on their training data, the prompt, and the context they get. This ability can lead to hallucinations – seemingly correct but misleading answers. System prompts and implemented guardrails help minimize the risk of such misinformation.</p>
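<p>To make the persona-shaping effect concrete, here is a minimal sketch of a chat request where only the system prompt differs. The payload shape loosely follows common messages-style chat APIs; the field names, model name, and persona texts are purely illustrative and do not reflect Amazon Q's actual internals:</p>

```typescript
// Hypothetical chat request payload. The shape loosely follows
// messages-style chat APIs; nothing here reflects Amazon Q internals.
interface ChatRequest {
  model: string;
  system: string; // the persona-setting system prompt
  messages: { role: "user" | "assistant"; content: string }[];
}

// Two personas receiving the exact same user question.
const consolePersona: ChatRequest = {
  model: "example-model",
  system:
    "You are a high-level AWS guide. Explain concepts step by step " +
    "and link to documentation. Do not emit code.",
  messages: [{ role: "user", content: "Host a website on AWS." }],
};

// Same question, different system prompt: the answer style changes.
const idePersona: ChatRequest = {
  ...consolePersona,
  system:
    "You are a coding assistant inside an IDE. Respond with a " +
    "ready-to-use CloudFormation template, not prose.",
};

console.log(consolePersona.system === idePersona.system); // false
```

<p>Identical question, different system prompt – which is exactly why the console and the IDE gave me two different kinds of answers above.</p>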
<p>How to phrase questions and provide context to Amazon Q becomes vital for success. Think about it as a specialized tool within the AWS ecosystem. Communities and documentation will play a huge role as we all discover the sweet spots for interacting with it.</p>
<p>It remains your job as a human to double-check code snippets, validate suggestions, and apply a critical lens. <strong>While a powerful assistant, Amazon Q is not a replacement for your critical thinking.</strong> Amazon Q can be incredible, but don't ditch studying AWS service documentation and FAQs. Q draws knowledge from these sources, and understanding the fundamentals will improve the quality of your conversations with it.</p>
<h3 id="heading-safety-first"><strong>Safety First</strong></h3>
<p>Unlike with open-ended tools like ChatGPT where YOU have fine-grained control over the persona, Amazon Q is carefully designed with guardrails so that AWS keeps a level of control to protect YOU as a customer. It seems like AWS has made a calculated trade-off, prioritizing customer safety and accuracy within the context of cloud development. You might hit situations where it says <em>"I apologize, your request seems outside my domain of expertise"</em>. While this could feel restrictive, it emphasizes AWS's "Job Zero" commitment to securing their customers.</p>
<p><a target="_blank" href="https://aws.amazon.com/machine-learning/responsible-ai/policy/">AWS's Responsible AI Policy</a> sheds light on why Amazon Q behaves this way. Key principles like safety, fairness, and transparency influence the design decisions behind such a tool and the overall user experience. While some flexibility might be sacrificed, this focus aligns with the responsible use of AI in a high-stakes domain like cloud infrastructure.</p>
<p>It is not just about helpful answers. It is about the responsible use of AI. This includes <a target="_blank" href="https://docs.aws.amazon.com/bedrock/latest/userguide/abuse-detection.html">automated abuse detection</a> built into Amazon Q. You will find general notes about this on many documentation pages related to Amazon Q. These are mechanisms to flag potentially harmful content or requests that violate AWS's <a target="_blank" href="https://aws.amazon.com/aup/">Acceptable Use Policy</a>. In addition to the guardrails we discussed, there's a proactive system ensuring the safe and ethical use of Amazon Q at scale. Flagged content could include:</p>
<ul>
<li><p>Attempts to generate malicious code or scripts</p>
</li>
<li><p>Queries designed to expose sensitive information</p>
</li>
<li><p>Content that promotes hate speech or discrimination</p>
</li>
<li><p>Requests that violate AWS's Acceptable Use Policy</p>
</li>
</ul>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Amazon Q isn't meant to think for you. Think of it as a set of specialized AI assistants ready to support different aspects of your cloud journey - focusing on safety and accuracy.</p>
<p>Users interacting with Amazon Q might not have profound experience in prompt engineering. From Amazon Q's perspective, this is a deliberate trade-off: valuing customer safety over hallucinated answers to every question.</p>
<p>My biggest challenge with Amazon Q: finding relevant prompt templates and incorporating the best practices I have learned so far in handling conversational interfaces. The fact that AWS sets the persona and context behind the scenes is a protection feature. I don't want Amazon Q to provide me with recipes, speak like Yoda, or explain quantum computing to me like I'm a 6-year-old. The context is set, and I can engage directly with Amazon Q.</p>
<p>What are your experiences so far? If you stumbled upon some killer prompt templates or new insights, don't be a stranger! Connect with me on <a target="_blank" href="https://www.linkedin.com/in/christian-bonzelet/">LinkedIn</a> for more discussions on all things cloud and AI.</p>
]]></content:encoded></item><item><title><![CDATA[Following the path of Architecture as Code]]></title><description><![CDATA[Introduction
Architecture as Code (AaC) is gaining momentum as a powerful paradigm for building cloud applications. AaC allows us to model our cloud architecture using familiar programming languages and tools, focusing on the intention of a solution ...]]></description><link>https://cremich.cloud/following-the-path-of-architecture-as-code</link><guid isPermaLink="true">https://cremich.cloud/following-the-path-of-architecture-as-code</guid><category><![CDATA[architecture as code]]></category><category><![CDATA[AWS]]></category><category><![CDATA[event-driven-architecture]]></category><category><![CDATA[AWS EventBridge]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Sun, 10 Mar 2024 20:18:49 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1709929111108/e517b61c-654a-4e17-88f3-1f58f9220b66.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p><a target="_blank" href="https://architectelevator.com/cloud/iac-architecture-as-code/">Architecture as Code (AaC)</a> is gaining momentum as a powerful paradigm for building cloud applications. AaC allows us to model our cloud architecture using familiar programming languages and tools, focusing on the intention of a solution ideally described with a pattern language.</p>
<p>Time-tested patterns like <a target="_blank" href="https://www.enterpriseintegrationpatterns.com/">Enterprise-Integration-Patterns</a> provide a structured vocabulary for describing the flow of messages and events within distributed systems. By implementing EIPs we establish a ubiquitous language that transcends specific technologies, enabling clear communication and collaboration across teams.</p>
<p>In this blog post, I want to provide a practical example of my attempt to use AaC to build modern cloud applications on AWS. I leverage the AWS CDK (Cloud Development Kit) as my Infrastructure-as-Code tool and demonstrate how EIPs are embodied in custom constructs for seamless integration flows.</p>
<h2 id="heading-building-a-ubiquitous-language"><strong>Building a Ubiquitous Language</strong></h2>
<p>One of the most powerful aspects of EIPs is the ability to bridge communication gaps between technical and non-technical stakeholders. By employing a shared language, we can collaborate on system designs and ensure that solutions align with technical and business needs. In my specific use case, the EIPs and their implementations we'll discover are:</p>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Enterprise Integration Pattern</td><td>AWS Service Implementation</td></tr>
</thead>
<tbody>
<tr>
<td>A <a target="_blank" href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/MessageBus.html"><strong>Message Bus</strong></a> performs as a Middleware between applications that enable them to work together using messaging.</td><td>Amazon EventBridge’s <a target="_blank" href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-event-bus.html">event bus</a> serves as the foundation of our integration. It provides a central channel for routing events between producers and consumers.</td></tr>
<tr>
<td>A <a target="_blank" href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/Filter.html"><strong>Message Filter</strong></a> eliminates undesired messages from a channel based on a set of criteria.</td><td>EventBridge <a target="_blank" href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-rules.html">Rules</a> allow us to selectively route events based on specific criteria, ensuring that only relevant events reach downstream processing components.</td></tr>
<tr>
<td><a target="_blank" href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/DataEnricher.html"><strong>Content-Enricher</strong></a> accesses external data sources to augment a message with missing information.</td><td>An <a target="_blank" href="https://docs.aws.amazon.com/lambda/latest/dg/welcome.html">AWS Lambda function</a> acts as a Content-Enricher. It processes events, fetches additional details, and appends this data to the original event.</td></tr>
<tr>
<td>A <a target="_blank" href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/RecipientList.html">Recipient List</a> inspects incoming messages, determines the list of recipients, and forwards messages to all channels associated with the recipients in the list.</td><td>EventBridge Rules establish a Recipient List to route events to a defined set of targets, directing them to appropriate downstream services.</td></tr>
</tbody>
</table>
</div><p>Next, let's take a closer look into a practical example of enriching events from a managed AWS service, where we'll demonstrate how these patterns come to life using <a target="_blank" href="https://docs.aws.amazon.com/prescriptive-guidance/latest/aws-cdk-layers/layer-3.html">custom CDK constructs</a>.</p>
<h2 id="heading-a-practical-example-enriching-aws-service-events"><strong>A Practical Example: Enriching AWS Service Events</strong></h2>
<p>I want to add detailed information about a Transcription job from Amazon Transcribe to the events emitted by the service, making downstream processing more streamlined and efficient. A typical use case could be forwarding the generated transcription text for human review.</p>
<p>Here is a sample event that you receive on the default event bus from Amazon Transcribe. It follows a typical <a target="_blank" href="https://martinfowler.com/articles/201701-event-driven.html">event notification style</a> containing the name of the Transcription job to be used by an event consumer to fetch additional details.</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"version"</span>: <span class="hljs-string">"0"</span>,
    <span class="hljs-attr">"id"</span>: <span class="hljs-string">"event ID"</span>,
    <span class="hljs-attr">"detail-type"</span>:<span class="hljs-string">"Transcribe Job State Change"</span>,
    <span class="hljs-attr">"source"</span>: <span class="hljs-string">"aws.transcribe"</span>,
    <span class="hljs-attr">"account"</span>: <span class="hljs-string">"111122223333"</span>,
    <span class="hljs-attr">"time"</span>: <span class="hljs-string">"timestamp"</span>,
    <span class="hljs-attr">"region"</span>: <span class="hljs-string">"us-west-2"</span>,
    <span class="hljs-attr">"resources"</span>: [ ],
    <span class="hljs-attr">"detail"</span>: {
          <span class="hljs-attr">"TranscriptionJobName"</span>: <span class="hljs-string">"my-first-transcription-job"</span>,
          <span class="hljs-attr">"TranscriptionJobStatus"</span>: <span class="hljs-string">"COMPLETED"</span> (or <span class="hljs-string">"FAILED"</span>)
    }   
}
</code></pre>
<h2 id="heading-solution-design"><strong>Solution Design</strong></h2>
<p>The following diagram visualizes my initial designed data flow:</p>
<ol>
<li><p>Amazon Transcribe emits events about Transcription job state changes onto the default EventBridge event bus, acting as a Message Bus pattern implementation.</p>
</li>
<li><p>An EventBridge Rule filters relevant events from Amazon Transcribe and routes them to an AWS Lambda function, representing a sequence of a Message Filter and a Recipient List pattern.</p>
</li>
<li><p>The AWS Lambda function serves as the Content-Enricher pattern. It receives the initial event from Amazon Transcribe, fetches additional details about the Transcription job, and appends these details to the event.</p>
</li>
<li><p>The enriched event is sent to a custom EventBridge bus via a Lambda Destinations channel.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1709926925695/39359e19-a85d-4cea-9d65-f469732f2167.png" alt class="image--center mx-auto" /></p>
<p>I liked the term "flow", so I made it an important part of my implementation and ubiquitous language. I started by defining an interface that provides consumers with features to design integration flows.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> Flow {
  withFilter(filter: events.EventPattern): Flow;
  withEnricher(code: lambda.AssetCode, policies?: iam.PolicyStatement[]): Flow;
  withEventTarget(eventTarget: events.IEventBus): <span class="hljs-built_in">void</span>;
}
</code></pre>
<h2 id="heading-aws-cdk-implementation">AWS CDK Implementation</h2>
<p>The <code>Flow</code> interface promotes a declarative approach to codify integrations. Consumers specify the desired interactions between components using a fluent syntax (<code>withFilter()</code>, <code>withEnricher()</code>, etc.). The flow for our example will then look like the following snippet. It seamlessly ties together the messaging concepts from our solution design using an expressive interface. I wanted to provide a clean interface for my integration logic while hiding service-specific implementation details and promoting a pattern-focused approach.</p>
<pre><code class="lang-typescript">source
  .flow(<span class="hljs-string">"TranscribeContentEnrichment"</span>)
  .withFilter({
    source: [<span class="hljs-string">"aws.transcribe"</span>],
    detailType: [<span class="hljs-string">"Transcribe Job State Change"</span>],
  })
  .withEnricher(
    lambda.Code.fromAsset(
      path.join(__dirname, <span class="hljs-string">"enricher.lambda.ts"</span>),
    ),
    [
      <span class="hljs-keyword">new</span> iam.PolicyStatement({
        resources: [<span class="hljs-string">"*"</span>],
        actions: [<span class="hljs-string">"transcribe:GetTranscriptionJob"</span>],
      }),
    ],
  )
  .withEventTarget(target);
</code></pre>
<h3 id="heading-data-flow">Data Flow</h3>
<p>Each concrete <code>Flow</code> must deal with service-specific implementation details. The EventBridge-specific implementation encapsulates the complexity of configuring EventBridge rules, including filters and targets, and of integrating the Content-Enricher Lambda function. This allows a user to focus on the design of a flow itself rather than wrestling with service-specific implementation details.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> EventBridgeFlowProps {
  eventSource: events.IEventBus;
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> EventBridgeFlow <span class="hljs-keyword">extends</span> Construct <span class="hljs-keyword">implements</span> Flow {
  <span class="hljs-keyword">readonly</span> rule: events.Rule;
  enricher?: lambda.Function;

  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">scope: Construct, id: <span class="hljs-built_in">string</span>, props: EventBridgeFlowProps</span>) {
    <span class="hljs-built_in">super</span>(scope, id);
    <span class="hljs-built_in">this</span>.rule = <span class="hljs-keyword">new</span> events.Rule(<span class="hljs-built_in">this</span>, <span class="hljs-string">"Rule"</span>, {
      eventBus: props.eventSource,
    });
  }

  withFilter(filter: events.EventPattern): EventBridgeFlow {
    <span class="hljs-built_in">this</span>.rule.addEventPattern(filter);
    <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>;
  }

  withEnricher(code: lambda.AssetCode, policies?: iam.PolicyStatement[]) {
    <span class="hljs-built_in">this</span>.enricher = <span class="hljs-keyword">new</span> lambda.Function(<span class="hljs-built_in">this</span>.rule, <span class="hljs-string">"Enricher"</span>, {
      handler: <span class="hljs-string">"index.lambda"</span>,
      code,
      runtime: <span class="hljs-keyword">new</span> lambda.Runtime(LAMBDA_RUNTIME, lambda.RuntimeFamily.NODEJS),
    });

    policies?.forEach(<span class="hljs-function">(<span class="hljs-params">p</span>) =&gt;</span> <span class="hljs-built_in">this</span>.enricher?.addToRolePolicy(p));

    <span class="hljs-built_in">this</span>.rule.addTarget(<span class="hljs-keyword">new</span> targets.LambdaFunction(<span class="hljs-built_in">this</span>.enricher));
    <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>;
  }

  withEventTarget(eventTarget: events.IEventBus) {
    <span class="hljs-keyword">if</span> (<span class="hljs-built_in">this</span>.enricher) {
      <span class="hljs-keyword">new</span> lambda.EventInvokeConfig(<span class="hljs-built_in">this</span>, <span class="hljs-string">"EnricherEventTarget"</span>, {
        <span class="hljs-function"><span class="hljs-keyword">function</span>: <span class="hljs-title">this</span>.<span class="hljs-title">enricher</span>,
        <span class="hljs-title">onSuccess</span>: <span class="hljs-title">new</span> <span class="hljs-title">destinations</span>.<span class="hljs-title">EventBridgeDestination</span>(<span class="hljs-params">eventTarget</span>),
      })</span>;
    } <span class="hljs-keyword">else</span> {
      <span class="hljs-built_in">this</span>.rule.addTarget(<span class="hljs-keyword">new</span> targets.EventBus(eventTarget));
    }
    <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>;
  }
}
</code></pre>
<h3 id="heading-messagebus">MessageBus</h3>
<p>The <code>MessageBus</code> construct is another pattern implementation, abstracting event bus creation and configuration. It further provides the <code>flow(id: string)</code> method, which acts as a factory method for defining a new integration flow specific to Amazon EventBridge.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> MessageBusProps {
  <span class="hljs-keyword">readonly</span> name: <span class="hljs-built_in">string</span>;
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">interface</span> IMessageBus {
  <span class="hljs-keyword">readonly</span> eventBus: events.IEventBus;
  flow(id: <span class="hljs-built_in">string</span>): Flow;
}

<span class="hljs-keyword">export</span> <span class="hljs-keyword">class</span> MessageBus <span class="hljs-keyword">extends</span> Construct <span class="hljs-keyword">implements</span> IMessageBus {
  eventBus: events.IEventBus;

  <span class="hljs-keyword">constructor</span>(<span class="hljs-params">scope: Construct, id: <span class="hljs-built_in">string</span>, props: MessageBusProps</span>) {
    <span class="hljs-built_in">super</span>(scope, id);

    <span class="hljs-built_in">this</span>.eventBus = <span class="hljs-keyword">new</span> events.EventBus(<span class="hljs-built_in">this</span>, <span class="hljs-string">"CustomEventBus"</span>, {
      eventBusName: props.name,
    });
  }

  <span class="hljs-keyword">public</span> flow(id: <span class="hljs-built_in">string</span>): Flow {
    <span class="hljs-keyword">return</span> <span class="hljs-keyword">new</span> EventBridgeFlow(<span class="hljs-built_in">this</span>, id, {
      eventSource: <span class="hljs-built_in">this</span>.eventBus,
    });
  }
}
</code></pre>
<h3 id="heading-content-enricher">Content Enricher</h3>
<p>The Lambda function simply calls the <a target="_blank" href="https://docs.aws.amazon.com/transcribe/latest/APIReference/API_GetTranscriptionJob.html">Amazon Transcribe API to fetch details about a transcription job</a>. By using Lambda Destinations, it is decoupled from any messaging or infrastructure logic. The AWS Lambda service itself takes care of routing the response of our function to another event bus that we can model with a <code>MessageBus</code> construct.</p>
<pre><code class="lang-typescript"><span class="hljs-keyword">const</span> lambdaHandler = <span class="hljs-keyword">async</span> (
  event: EventBridgeEvent&lt;<span class="hljs-string">"Transcribe Job State Change"</span>, TranscribeJobStateChanged&gt;,
): <span class="hljs-built_in">Promise</span>&lt;EventBridgeEvent&lt;<span class="hljs-string">"Transcribe Job State Change"</span>, TranscriptionJob&gt;&gt; =&gt; {
  <span class="hljs-keyword">const</span> transcriptionJob = <span class="hljs-keyword">await</span> transcribeClient.getTranscriptionJob({
    TranscriptionJobName: event.detail.TranscriptionJobName,
  });

  <span class="hljs-keyword">return</span> {
    ...event,
    detail: { ...transcriptionJob.TranscriptionJob },
  };
};
</code></pre>
<p>Messaging-related concerns are handled by our <code>Flow</code> implementation, as you can see in the following snippet. A <code>Flow</code> that contains a content-enricher implemented as an AWS Lambda function creates an <a target="_blank" href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-eventinvokeconfig.html"><code>EventInvokeConfig</code></a> Lambda Destination. If the flow does not contain a content-enricher, it adds the event target directly to the underlying rule.</p>
<pre><code class="lang-typescript">withEventTarget(eventTarget: events.IEventBus) {
    <span class="hljs-keyword">if</span> (<span class="hljs-built_in">this</span>.enricher) {
      <span class="hljs-keyword">new</span> lambda.EventInvokeConfig(<span class="hljs-built_in">this</span>, <span class="hljs-string">"EnricherEventTarget"</span>, {
        <span class="hljs-function"><span class="hljs-keyword">function</span>: <span class="hljs-title">this</span>.<span class="hljs-title">enricher</span>,
        <span class="hljs-title">onSuccess</span>: <span class="hljs-title">new</span> <span class="hljs-title">destinations</span>.<span class="hljs-title">EventBridgeDestination</span>(<span class="hljs-params">eventTarget</span>),
      })</span>;
    } <span class="hljs-keyword">else</span> {
      <span class="hljs-built_in">this</span>.rule.addTarget(<span class="hljs-keyword">new</span> targets.EventBus(eventTarget));
    }
    <span class="hljs-keyword">return</span> <span class="hljs-built_in">this</span>;
  }
</code></pre>
<div data-node-type="callout">
<div data-node-type="callout-emoji">💡</div>
<div data-node-type="callout-text">For now, the method signature of <code>withEventTarget</code> restricts us from using anything else than an Amazon EventBridge event bus as a target. Multiple other valid targets like AWS Step Functions, Amazon SQS, or Amazon SNS can be modeled at a later stage supporting more integration use cases.</div>
</div>
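<p>As a dependency-free thought experiment on how <code>withEventTarget</code> could be generalized beyond event buses, the target type could become a discriminated union that the flow dispatches on. The snippet below is plain TypeScript with made-up stand-in types instead of real CDK interfaces – a sketch of the idea, not working infrastructure code:</p>

```typescript
// Dependency-free sketch: generalizing the flow target to a
// discriminated union. All types are illustrative stand-ins for
// CDK interfaces such as events.IEventBus or sqs.IQueue.
type FlowTarget =
  | { kind: "eventBus"; busName: string }
  | { kind: "queue"; queueUrl: string }
  | { kind: "stateMachine"; stateMachineArn: string };

class SketchFlow {
  readonly targets: FlowTarget[] = [];

  withEventTarget(target: FlowTarget): SketchFlow {
    // A real implementation would map each variant to the matching
    // CDK target construct (targets.EventBus, targets.SqsQueue, ...)
    // inside the same if/else dispatch the EventBridgeFlow uses today.
    this.targets.push(target);
    return this;
  }
}

const flow = new SketchFlow()
  .withEventTarget({ kind: "eventBus", busName: "enriched-events" })
  .withEventTarget({ kind: "queue", queueUrl: "https://sqs.example/queue" });

console.log(flow.targets.length); // 2
```

<p>The fluent <code>Flow</code> interface would stay untouched; only the dispatch logic inside <code>withEventTarget</code> grows with each supported target variant.</p>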

<p>Running our configured flow, we now see that the enriched event contains all the details of a Transcription job enveloped by the AWS Lambda Destination integration as part of the response payload.</p>
<pre><code class="lang-json">{
    <span class="hljs-attr">"version"</span>: <span class="hljs-string">"0"</span>,
    <span class="hljs-attr">"id"</span>: <span class="hljs-string">"a7dbf591-191f-0820-f780-d9dea822c9fc"</span>,
    <span class="hljs-attr">"detail-type"</span>: <span class="hljs-string">"Lambda Function Invocation Result - Success"</span>,
    <span class="hljs-attr">"source"</span>: <span class="hljs-string">"lambda"</span>,
    <span class="hljs-attr">"detail"</span>: {
        <span class="hljs-attr">"responsePayload"</span>: {
            <span class="hljs-attr">"version"</span>: <span class="hljs-string">"0"</span>,
            <span class="hljs-attr">"id"</span>: <span class="hljs-string">"...."</span>,
            <span class="hljs-attr">"detail-type"</span>: <span class="hljs-string">"Transcribe Job State Change"</span>,
            <span class="hljs-attr">"source"</span>: <span class="hljs-string">"aws.transcribe"</span>,
            <span class="hljs-attr">"account"</span>: <span class="hljs-string">"..."</span>,
            <span class="hljs-attr">"time"</span>: <span class="hljs-string">"2024-03-08T20:46:35Z"</span>,
            <span class="hljs-attr">"region"</span>: <span class="hljs-string">"eu-central-1"</span>,
            <span class="hljs-attr">"resources"</span>: [],
            <span class="hljs-attr">"detail"</span>: {
                <span class="hljs-attr">"CompletionTime"</span>: <span class="hljs-string">"2024-03-08T20:46:35.342Z"</span>,
                <span class="hljs-attr">"CreationTime"</span>: <span class="hljs-string">"2024-03-08T20:45:57.014Z"</span>,
                <span class="hljs-attr">"LanguageCode"</span>: <span class="hljs-string">"en-GB"</span>,
                <span class="hljs-attr">"Media"</span>: {
                    <span class="hljs-attr">"MediaFileUri"</span>: <span class="hljs-string">"s3://.../videos/3811065_DAL_5387406.mp4"</span>
                },
                <span class="hljs-attr">"MediaFormat"</span>: <span class="hljs-string">"mp4"</span>,
                <span class="hljs-attr">"MediaSampleRateHertz"</span>: <span class="hljs-number">48000</span>,
                <span class="hljs-attr">"ModelSettings"</span>: {
                    <span class="hljs-attr">"LanguageModelName"</span>: <span class="hljs-string">"en-gb"</span>
                },
                <span class="hljs-attr">"Settings"</span>: {
                    <span class="hljs-attr">"ChannelIdentification"</span>: <span class="hljs-literal">false</span>,
                    <span class="hljs-attr">"ShowAlternatives"</span>: <span class="hljs-literal">false</span>,
                    <span class="hljs-attr">"VocabularyName"</span>: <span class="hljs-string">"95f6b2b4-d779-44f9-9f76-25fe3e69c7bf"</span>
                },
                <span class="hljs-attr">"StartTime"</span>: <span class="hljs-string">"2024-03-08T20:45:57.049Z"</span>,
                <span class="hljs-attr">"Subtitles"</span>: {
                    <span class="hljs-attr">"Formats"</span>: [
                        <span class="hljs-string">"vtt"</span>
                    ],
                    <span class="hljs-attr">"SubtitleFileUris"</span>: [
                        <span class="hljs-string">"https://.../transcriptions/95f6b2b4-d779-44f9-9f76-25fe3e69c7bf/transcript-1.vtt"</span>
                    ]
                },
                <span class="hljs-attr">"Transcript"</span>: {
                    <span class="hljs-attr">"TranscriptFileUri"</span>: <span class="hljs-string">"https://.../transcriptions/95f6b2b4-d779-44f9-9f76-25fe3e69c7bf/transcript-1.json"</span>
                },
                <span class="hljs-attr">"TranscriptionJobName"</span>: <span class="hljs-string">"95f6b2b4-d779-44f9-9f76-25fe3e69c7bf-1"</span>,
                <span class="hljs-attr">"TranscriptionJobStatus"</span>: <span class="hljs-string">"COMPLETED"</span>
            }
        }
    }
}
</code></pre>
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this blog post, we explored how I used the AWS CDK to implement my Architecture as Code. Three takeaways from my exploration:</p>
<ul>
<li><p><strong>Ubiquitous Language:</strong> EIPs provide a common vocabulary to describe integrations, bridging communication gaps, and promoting collaboration.</p>
</li>
<li><p><strong>Finding good abstractions is hard:</strong> Only well-thought-out L3 constructs raise the level of abstraction when defining cloud architectures. They embody patterns, encapsulate AWS-specific details, and promote a declarative, configuration-focused approach. Finding the right level of abstraction when defining infrastructure with code is not an easy task. Adopting AaC requires fluency in programming, infrastructure, and integration concepts. That is a learning curve of its own.</p>
</li>
<li><p><strong>AWS CDK for the win:</strong> The expressiveness of the AWS CDK and the fact that you can model your infrastructure as real code opens the world for Architecture as Code. You can switch perspectives by communicating intentions instead of service selections.</p>
</li>
</ul>
<p>I encourage you to give it a try in your projects. How are you using Architecture as Code in your AWS projects? Did I find a good abstraction for my integration concerns? Let me know and share your experiences in the comments or connect with me on <a target="_blank" href="https://www.linkedin.com/in/christian-bonzelet/">LinkedIn</a>.</p>
<p>Let's continue the conversation!</p>
]]></content:encoded></item><item><title><![CDATA[Building with purpose]]></title><description><![CDATA[This article was written in collaboration with Lukas Müller - Manager Solutions Architect at Amazon Web Services.
Misaligned development teams are your kryptonite in an organization full of superheroes. It is not about the latest cutting-edge framewo...]]></description><link>https://cremich.cloud/building-with-purpose</link><guid isPermaLink="true">https://cremich.cloud/building-with-purpose</guid><category><![CDATA[architect]]></category><category><![CDATA[cloud architecture]]></category><category><![CDATA[business]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Mon, 04 Dec 2023 07:00:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1701443869688/99abac10-d739-4c6c-9511-4150c4383e95.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>This article was written in collaboration with</em> <a target="_blank" href="https://www.linkedin.com/in/lumuell/"><em>Lukas Müller</em></a> <em>- Manager Solutions Architect at Amazon Web Services.</em></p>
<p>Misaligned development teams are your kryptonite in an organization full of superheroes. It is not about the latest cutting-edge framework – it is about our customers, our business, and how good and fast we can solve their problems. Werner Vogels introduced in his <a target="_blank" href="https://www.youtube.com/watch?v=UTRBVPvzt9w">re:Invent keynote 2023</a> the fitting term of the <a target="_blank" href="https://thefrugalarchitect.com/">frugal architect</a>. Being frugal starts not with measuring costs but earlier. It starts with understanding the business.</p>
<p>In this article, we will show that being a frugal architect involves more than just designing and architecting technical solutions. Based on a fictional but typical scenario, you will learn</p>
<ul>
<li><p>why defining business outcomes and understanding business value has to come first</p>
</li>
<li><p>how a frugal architect uses feedback channels for technical and business decisions</p>
</li>
</ul>
<h2 id="heading-a-request-from-the-cto">A request from the CTO</h2>
<p>Meet Samantha, the visionary CTO of a bustling tech company. Samantha's mind is always buzzing with ideas to drive innovation. One day, she contacted Alex, a talented Lead Engineer, through their Slack channel.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1689251134380/b1ef804d-7037-474d-97ac-c39d7edd19ef.gif?auto=format,compress&amp;gif-q=60&amp;format=webm" alt="A chat between Alex a lead engineer and Samantha his CTO. Samantha asks Alex if he has time to join a meeting to plan a new data lake." /></p>
<p>How can the story between Alex and Samantha continue?  Let us explore two alternative realities. What if Alex acts wasteful? What if he acts frugal?</p>
<p><a target="_blank" href="https://www.youtube.com/watch?v=UTRBVPvzt9w">Let’s jump back into the matrix!</a></p>
<h2 id="heading-alex-the-wasteful-architect">Alex, the wasteful architect</h2>
<p>Alex proceeds headfirst with the data lake project without questioning its purpose or considering its potential impact on the company's business metrics. He gathers a team of five engineers and allocates a budget of $200,000 for the project. Over four months, the team builds the infrastructure, including data storage, automated data processing, and a user-friendly interface.</p>
<p>The project exhausts all allocated resources and budget, but the company does not witness any substantial benefits. Unaware of the potential benefits, the different teams neglect to fully use the new data infrastructure and stick to their daily routines. The sales team still struggles with forecasting accuracy, and the marketing team continues to face difficulties in measuring campaign effectiveness. The overall impact on the company's growth and efficiency is marginal, and the potential value of the data solution remains largely untapped.</p>
<p>Once finished, all tasks on the team’s Kanban board are moved to "<strong>done</strong>". The project reaches its end, and everything turns back to business as usual. The data lake collects a lot of data without data consumer teams actively using it in their day-to-day work and decision-making.</p>
<p><strong>Observations:</strong></p>
<ol>
<li><p><strong>Lack of clear objectives:</strong> The absence of clear objectives and of a focus on customer problems and business outcomes led to a costly misadventure. Alex is happy to move another task to “done” and feels comfortable building what was requested: a data lake. Although numerous people congratulated <strong>Samantha</strong> at the release party, she was not able to measure and prove if and how the data lake helped the company become more data-driven.</p>
</li>
<li><p><strong>Missing customer-centricity:</strong> The project team failed to collaborate with potential data consumer teams to understand their pain points and goals, resulting in a data lake that is not actively used. Stakeholders are getting nervous as the initial budget of $200,000 was wasted, and additional OPEX costs to keep the data lake running will strain future budgets.</p>
</li>
<li><p><strong>Focus on output instead of outcome:</strong> Because of a lack of understanding of desired outcomes and goals, the project has become an expensive failure, eroding trust in future data-driven initiatives. So far, the only measurable impact is the waste of time, budget, and resources.</p>
</li>
</ol>
<h2 id="heading-alex-the-frugal-architect">Alex, the frugal architect</h2>
<p>Alex's approach to the data lake project takes on a different shape. <strong>He understands the importance of key performance indicators (KPIs) in driving success</strong>. Alex recognizes the need to lay a solid foundation aligned with business goals.</p>
<p>Alex organizes a workshop <strong>involving key stakeholders</strong> from sales, marketing, and finance teams. The purpose of this workshop is to <strong>gain a deep understanding of their pain points and goals</strong>. Receiving buy-in from everyone involved proves to be no easy task. Samantha initially seeks a quick solution, and some team members express skepticism and pushback on Alex's approach, demanding tangible results.</p>
<p>Alex recognizes the <strong>value of stakeholder buy-in and the importance of defining clear KPIs</strong>. Through a series of one-on-one meetings, Alex poses thought-provoking questions to stakeholders, such as</p>
<blockquote>
<p><em>"Imagine you could measure everything: What would you measure?"</em></p>
</blockquote>
<p>This gives invaluable insights into the perspectives and needs of stakeholders. It fosters a sense of understanding and collaboration, as Alex encourages them to define the metrics that matter most to their respective departments, <strong>emphasizing the need for clear, measurable goals to drive the project's success.</strong></p>
<p>The stakeholders from sales, marketing, and finance teams collectively identify three important business outcomes:</p>
<ul>
<li><p>a 50% reduction in data retrieval time until the end of the year,</p>
</li>
<li><p>a 30% improvement in marketing campaign analysis efficiency three months after launch, and</p>
</li>
<li><p>a 90% accuracy rate for revenue trend predictions six months after launch.</p>
</li>
</ul>
<p>Alex compiles a written summary outlining the potential benefits of a successful data lake <strong>implementation backed by the business</strong>. In addition, Alex conducts a "back of the napkin" cost-benefit analysis, clearly demonstrating that the six-month project, involving a core team of 12 people and an allocated budget of $300,000, will yield substantial returns through increased efficiency and tangible business outcomes.</p>
<p>With the CTO's approval and the limited budget secured, Alex proceeds to work closely with stakeholders. Single-threaded leaders are defined, and communication channels are established to share progress updates, gather feedback, and address blockers during the development process.</p>
<p><strong>Observations:</strong></p>
<ol>
<li><p><strong>Business Outcomes and KPIs as Proof of Success:</strong> The architect's focus on business outcomes and KPIs enables them to look beyond immediate tasks and foresee the impact of their work. By prioritizing the right goals, the business ensures that the project aligns with the broader objectives of the organization. By focusing on KPIs, <a target="_blank" href="https://www.youtube.com/watch?v=UTRBVPvzt9w&amp;t=3280s">we give control over technology decisions back to the business.</a></p>
</li>
<li><p><strong>Involvement of stakeholders and iterative collaboration:</strong> The architect recognizes the value of involving stakeholders throughout the project. Iterating and learning from each other fosters a sense of shared ownership and knowledge transfer, leading to a more successful outcome.</p>
</li>
<li><p><strong>Understanding the role of delivering the right thing:</strong> The architect understands that the role goes beyond simply delivering a product or solution. Architects form the backbone of the project, unafraid to dive into the business context and to support decision-making in service of business goals and long-term success.</p>
</li>
</ol>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Being a frugal architect is about understanding business outcomes, speaking the language of business, building systems that evolve with business models, and aligning development efforts towards a common business goal.</p>
<p>This article emphasized the importance of business outcomes and understanding business value across all organizational sectors.</p>
<p>Defining clear goals and measurable KPIs helps development teams make informed decisions. Be aware that you may trigger fears just by discussing KPIs. Your best conversational skills, empathy, and the ability to connect with others will help to drive change.</p>
<p>We invite you to share your thoughts and experiences on this topic in the comment section below. How have you aligned your work with business goals? What strategies have you found effective in explaining the importance of business alignment to developers? Let's continue the conversation and learn from each other.</p>
<p>Now go build ... with purpose.</p>
]]></content:encoded></item><item><title><![CDATA[Scaling content delivery while saving costs?]]></title><description><![CDATA[In the domain of media and entertainment, every byte of data and every millisecond of latency counts. As professionals in this space, we're not just delivering content; we're crafting experiences. And while we strive for excellence in quality, we're ...]]></description><link>https://cremich.cloud/scaling-content-delivery-while-saving-costs</link><guid isPermaLink="true">https://cremich.cloud/scaling-content-delivery-while-saving-costs</guid><category><![CDATA[Amazon Cloudfront]]></category><category><![CDATA[cost-optimisation]]></category><category><![CDATA[content delivery network ]]></category><category><![CDATA[AWS]]></category><category><![CDATA[CDN]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Thu, 12 Oct 2023 11:15:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1697108652242/a7dffad0-eed8-49c5-b80f-4faa44b0c59a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the domain of media and entertainment, every byte of data and every millisecond of latency counts. As professionals in this space, we're not just delivering content; we're crafting experiences. And while we strive for excellence in quality, we're also constantly on the lookout for ways to optimize costs. After all, a well-architected cloud solution isn't just about performance and scalability—it's also about financial efficiency.</p>
<p><a target="_blank" href="https://aws.amazon.com/cloudfront/">Amazon CloudFront</a>, AWS's global content delivery network, is an essential service for media &amp; entertainment, offering a global reach with its extensive network of edge locations. But with great power comes... well, costs. The good news? There are several strategies and features within CloudFront designed specifically to help you save money. This guide will walk you through the nuances of CloudFront's pricing, from the basics of the Free Tier to the intricacies of custom pricing. So, grab a coffee, settle in, and let's dive into cost savings with Amazon CloudFront.</p>
<h2 id="heading-how-aws-charges-for-cloudfront"><strong>💰 How Does AWS Charge for CloudFront?</strong></h2>
<p>When it comes to cloud services, understanding the pricing model is half the battle. And with Amazon CloudFront, it's no different. At its core, CloudFront's pricing is a reflection of the service's versatility and global reach. But, as with any service, the more you know about its pricing intricacies, the better equipped you are to make cost-effective decisions.</p>
<p>At a high level, AWS charges for CloudFront based on several factors. The two most prominent are:</p>
<ol>
<li><p><strong>Data Transfer Out Rates</strong>: This is the cost associated with the amount of data that CloudFront delivers to your viewers. It's important to note that these rates vary depending on the geographic region of your viewers.</p>
</li>
<li><p><strong>HTTP/HTTPS Request Rates</strong>: Every time a viewer makes a request (be it HTTP or HTTPS), there's a charge. Again, these rates differ based on the viewer's region.</p>
</li>
</ol>
<p>If you want to dive deeper into the nuances of CloudFront pricing, please check out the <a target="_blank" href="https://aws.amazon.com/cloudfront/pricing/">official pricing page</a>.</p>
<p>One of the unique aspects of CloudFront's pricing is its variability. The cost isn't just about how much data you're transferring or how many requests you're handling. It's also about where your viewers are located. AWS has divided the world into different regions, and each region has its own set of rates for both data transfer and requests.</p>
<p>For instance, data transfer out rates for viewers in North America or Europe might differ from rates for viewers in Asia or South America. This regional variability is something to keep in mind, especially if your media content has a global audience.</p>
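<p>To build intuition for how the region mix drives the bill, here is a minimal sketch in Python. The per-GB rates below are illustrative placeholders, not current AWS prices; always check the official pricing page for real figures.</p>

```python
# Back-of-the-envelope estimator for CloudFront data transfer out costs.
# NOTE: these per-GB rates are ILLUSTRATIVE, not actual AWS pricing.
ILLUSTRATIVE_RATES_PER_GB = {
    "north-america": 0.085,
    "europe": 0.085,
    "south-america": 0.110,
    "asia-pacific": 0.120,
}

def estimated_transfer_cost(gb_by_region):
    """Sum the data-transfer-out cost across viewer regions."""
    return sum(
        gb * ILLUSTRATIVE_RATES_PER_GB[region]
        for region, gb in gb_by_region.items()
    )

# 5 TB to North America plus 1 TB to Asia-Pacific: even a small share of
# traffic in a higher-priced region moves the total noticeably.
total = estimated_transfer_cost({"north-america": 5120, "asia-pacific": 1024})
print(round(total, 2))
```

<p>Running a few audience mixes through an estimator like this is a quick way to see why your viewers' geography matters as much as your raw traffic volume.</p>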
<h2 id="heading-diving-into-the-free-tier">🆓 Diving into the Free Tier</h2>
<p>Ah, the Free Tier. It's like the appetizer before the main course, giving you a taste of what's to come without the commitment. For those new to AWS or those wanting to experiment with CloudFront without immediately incurring costs, the Free Tier is a godsend.</p>
<p>Amazon CloudFront's Free Tier is not just a marketing gimmick; it's an offering that can provide significant value, especially when you're in the initial stages of setting up or testing your media delivery.</p>
<p>Here's what you get with the Free Tier:</p>
<ol>
<li><p><strong>1 TB Data Transfer Out Each Month</strong>: This is a generous amount, especially for small to medium-sized projects or for those in the testing phase. It allows you to deliver content to your viewers without incurring any costs for the first 1 TB each month.</p>
</li>
<li><p><strong>10 Million HTTP/HTTPS Requests Each Month</strong>: Again, this is a substantial number. For many websites or applications in their early stages, this can cover a significant portion, if not all, of their monthly traffic.</p>
</li>
<li><p><strong>2,000,000 CloudFront Function Invocations Each Month</strong></p>
</li>
<li><p><strong>Free SSL Certificates</strong></p>
</li>
</ol>
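<p>If you want to sanity-check whether a projected workload stays within these allowances, a quick helper might look like this (a sketch using the free tier numbers listed above):</p>

```python
# Published CloudFront free tier allowances (see the list above).
FREE_TIER = {
    "data_transfer_out_gb": 1024,        # 1 TB per month
    "requests": 10_000_000,              # HTTP/HTTPS requests per month
    "function_invocations": 2_000_000,   # CloudFront Function invocations
}

def fits_free_tier(data_gb, requests, invocations=0):
    """Return True if the projected monthly usage is fully covered."""
    return (
        data_gb <= FREE_TIER["data_transfer_out_gb"]
        and requests <= FREE_TIER["requests"]
        and invocations <= FREE_TIER["function_invocations"]
    )

# Example: 500 GB and 2 million requests fit comfortably.
print(fits_free_tier(500, 2_000_000))
```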
<p>It's essential to note that, unlike many AWS Free Tier offers that expire 12 months after account creation, CloudFront's free usage tier is part of the Always Free offerings: the allowances above reset every month for every AWS account. Any usage beyond them is billed at standard CloudFront rates.</p>
<p>The Free Tier is not just about saving money (though that's a big part of it). It's also about learning, experimenting, and iterating. It provides a risk-free environment to test your media delivery, understand CloudFront's features, and optimize your setup before scaling up.</p>
<p>Moreover, for startups or individual content creators in the media &amp; entertainment space, every penny counts. The Free Tier can be a financial relief, allowing you to allocate resources to other critical areas while still delivering a top-notch viewer experience.</p>
<h2 id="heading-price-classes-tailoring-your-costs"><strong>🌐 Price Classes: Tailoring Your Costs</strong></h2>
<p>In the landscape of media delivery, not all regions are created equal. Some areas might be more expensive to deliver content to, while others might be more cost-effective. CloudFront's Price Classes are designed to give you control over where your content is delivered from, allowing you to strike a balance between cost and performance.</p>
<p>At its core, <a target="_blank" href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PriceClass.html">Price Classes</a> allow you to specify which of CloudFront's global edge locations you want your content to be served from. By default, CloudFront aims to minimize latency by delivering content from its entire global network. However, this might mean you're paying more to deliver content to certain regions where AWS's costs are higher.</p>
<p>Here's a breakdown:</p>
<ol>
<li><p><strong>Price Class All (Default)</strong>: This uses all of CloudFront's global edge locations, ensuring the lowest latency but potentially higher costs.</p>
</li>
<li><p><strong>Price Class 200</strong>: This excludes the South America, Australia, and New Zealand regions, offering a balance between cost and performance.</p>
</li>
<li><p><strong>Price Class 100</strong>: This further narrows down the edge locations to only North America, Europe, and Israel, focusing on cost-effective delivery at the price of potentially higher latency for some users.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697108625381/48d9daea-d9e5-423a-a57d-6091bf7b5f67.png" alt="Amazon Cloudfront Price classes" class="image--center mx-auto" /></p>
<p>Choosing the right Price Class is a strategic decision. Here are some considerations:</p>
<ul>
<li><p><strong>Audience Geography</strong>: Where is the bulk of your audience located? If most of your viewers are in regions covered by Price Class 100 or 200, then opting for one of these might make sense.</p>
</li>
<li><p><strong>Quality of Experience</strong>: Are you willing to compromise slightly on latency for certain users to save on costs? If yes, then a more restrictive Price Class might be the way to go.</p>
</li>
<li><p><strong>Budget Constraints</strong>: If you're working with a tight budget, especially in the early stages of a project, opting for a more cost-effective Price Class can be a smart move.</p>
</li>
</ul>
<p>Imagine you're delivering high-definition video content primarily to viewers in North America and Europe. By choosing <strong>Price Class 100</strong>, you can ensure optimal delivery to these regions while saving on costs by excluding more expensive regions. However, if you have a growing viewer base in Asia, you might need to evaluate if the cost savings outweigh the potential increase in latency for these users.</p>
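<p>A decision like the one in this scenario can be sketched as a small helper that picks the cheapest price class still covering your audience. The region labels and groupings below are simplified for illustration, not official AWS terminology:</p>

```python
# Hypothetical mapping of price classes to the viewer regions they serve
# well (simplified labels for illustration).
PRICE_CLASS_COVERAGE = {
    "PriceClass_100": {"north-america", "europe", "israel"},
    "PriceClass_200": {"north-america", "europe", "israel",
                       "africa", "middle-east", "asia"},
    "PriceClass_All": {"north-america", "europe", "israel",
                       "africa", "middle-east", "asia",
                       "south-america", "australia-new-zealand"},
}

# Ordered from cheapest to most expensive.
PRICE_CLASS_ORDER = ["PriceClass_100", "PriceClass_200", "PriceClass_All"]

def cheapest_covering_price_class(audience_regions):
    """Return the cheapest price class whose edge locations cover
    every region where you have a meaningful audience."""
    audience = set(audience_regions)
    for price_class in PRICE_CLASS_ORDER:
        if audience <= PRICE_CLASS_COVERAGE[price_class]:
            return price_class
    return "PriceClass_All"
```

<p>The trade-off the scenario describes shows up directly: add "asia" or "south-america" to the audience list and the helper climbs to a more expensive class, which is exactly the moment to weigh cost savings against latency.</p>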
<h2 id="heading-saving-bundles-more-than-just-cost-saving"><strong>🛡️ Saving Bundles: More Than Just Cost-Saving</strong></h2>
<p>In the quest to optimize costs, CloudFront's <a target="_blank" href="https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/savings-bundle.html">Saving Bundles</a> emerge as a compelling option. But it's not just about the savings; it's about enhancing security while keeping costs in check.</p>
<p>The CloudFront security savings bundle is a blend of cost-saving and enhanced security. When you opt for this bundle, you're not just committing to a consistent monthly amount; you're also getting credits for AWS WAF, a web application firewall that fortifies your CloudFront distribution against common web threats.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1697109307345/40cf4b76-919b-4b2c-a29b-96863662407c.png" alt="Amazon Cloudfront saving bundles" class="image--center mx-auto" /></p>
<p>Here's a breakdown:</p>
<ol>
<li><p><strong>Commitment</strong>: By purchasing a savings bundle, you agree to a fixed monthly amount for CloudFront for one year. This commitment ensures you have a predictable budget, and in return, you get credits that offset your CloudFront charges.</p>
</li>
<li><p><strong>Up to 30% Discount</strong>: The value of these credits can result in up to a 30% discount on CloudFront's standard pricing. It's like getting premium service at a discounted rate.</p>
</li>
<li><p><strong>AWS WAF Credits</strong>: In addition to CloudFront credits, you receive credits for AWS WAF. This can offset up to 10% of the monthly CloudFront commitment, providing an added layer of security without additional costs.</p>
</li>
</ol>
<p>Let's say your typical CloudFront charges amount to $600 per month. By committing to $420 each month for a year (a 30% reduction), CloudFront provides you with $600 worth of credits monthly. In essence, you're paying $420 for services worth $600. Plus, you get an additional $42 in AWS WAF credits. Over a year, this can lead to substantial savings.</p>
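<p>The arithmetic above, spelled out as a quick sketch (figures are illustrative; the actual commitment and credit terms are shown in the CloudFront console):</p>

```python
# Savings-bundle math from the example: commit to 70% of your typical
# spend, receive credits worth the full amount plus WAF credits.
standard_monthly_spend = 600.00                        # typical on-demand bill
commitment = round(standard_monthly_spend * 0.70, 2)   # 30% discount
monthly_credits = standard_monthly_spend               # CloudFront credits granted
waf_credits = round(commitment * 0.10, 2)              # up to 10% of commitment

annual_savings = round((standard_monthly_spend - commitment) * 12, 2)
print(commitment, waf_credits, annual_savings)
```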
<p>Key points to consider when using savings bundles:</p>
<ul>
<li><p><strong>Credits Apply Account-Wide</strong>: These credits aren't restricted to a specific distribution. They apply across all CloudFront usage in your AWS account.</p>
</li>
<li><p><strong>Credits Cover All CloudFront Usage</strong>: Whether it's data transfer charges, request charges, or Lambda@Edge charges, the credits offset all types of CloudFront usage.</p>
</li>
<li><p><strong>Unused Credits</strong>: Remember, credits are use-it-or-lose-it. If you don't utilize all the credits in a billing period, they don't roll over to the next.</p>
</li>
<li><p><strong>Exceeding Credit Amount</strong>: If your usage surpasses the available credits, you'll be billed the difference at standard rates.</p>
</li>
</ul>
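<p>Putting the use-it-or-lose-it and overage rules together, a single billing period under a 30% discount bundle can be modeled roughly like this (a simplification; the actual credit mechanics are defined by AWS):</p>

```python
def monthly_out_of_pocket(commitment, usage):
    """Approximate out-of-pocket CloudFront cost for one month.

    Assumes a 30% discount bundle: credits worth commitment / 0.70 are
    granted each month, unused credits expire, and usage beyond the
    credits is billed at standard rates.
    """
    credits = round(commitment / 0.70, 2)   # credit value for the month
    overage = max(0.0, usage - credits)     # billed at standard rates
    return round(commitment + overage, 2)
```

<p>Note how under-using the credits (say, only $100 of usage against a $420 commitment) still costs the full $420, while usage above the credit amount adds the difference on top.</p>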
<h2 id="heading-custom-pricing-for-the-heavy-hitters"><strong>💼 Custom Pricing: For the Heavy Hitters</strong></h2>
<p>In media &amp; entertainment, scale is often the name of the game. As your content reaches a broader audience and your traffic surges, standard pricing models might not always be the most economical. That's where Custom Pricing steps in, offering tailored solutions for those with substantial data transfer needs.</p>
<p>Custom Pricing isn't for everyone. It's designed for users who are ready to commit to a minimum of 10 TB of data transfer per month for at least 12 months. If you fit this bill, here's what's in store:</p>
<ol>
<li><p><strong>Tailored Discounts</strong>: The discounts vary based on the volume of your commitment. The more you commit, the better the rates you can secure.</p>
</li>
<li><p><strong>Organization-Wide Application</strong>: If you're managing multiple AWS accounts within an organization, the custom pricing applies across the board. This ensures consistent savings, irrespective of which account is handling the traffic.</p>
</li>
</ol>
<p>Why should you consider custom pricing?</p>
<ul>
<li><p><strong>Predictability</strong>: With custom pricing, you're entering a commitment. This means you have a clear forecast of your costs, allowing for better budgeting and financial planning.</p>
</li>
<li><p><strong>Economies of Scale</strong>: As your traffic grows, the per-unit cost of delivery can decrease with custom pricing, ensuring that your success doesn't lead to disproportionate costs.</p>
</li>
<li><p><strong>Negotiation Power</strong>: Custom pricing discussions with AWS give you a platform to negotiate terms based on your specific needs and projected growth.</p>
</li>
</ul>
<p>If you believe Custom Pricing is right for you, the first step is to reach out to AWS. The process involves discussions to understand your requirements, after which AWS provides a tailored pricing proposal. Remember, the emphasis here is on partnership. AWS understands the challenges of delivering high-quality media content at scale and is often willing to work closely with users to find the best pricing solution.</p>
<h2 id="heading-conclusion">🚀 Conclusion</h2>
<p>Navigating AWS pricing can sometimes feel like charting a course through uncharted waters. But with the right knowledge and tools at your disposal, you can ensure smooth sailing. Amazon CloudFront, with its cost-saving options, enables media &amp; entertainment professionals to deliver top-notch content without breaking the bank.</p>
<p>From the Free Tier's generous offerings for newcomers to the tailored solutions of Custom Pricing for the big players, there's a cost-saving strategy for everyone. With features like Price Classes and Saving Bundles, you have the flexibility to fine-tune your expenses based on your specific needs and audience demographics.</p>
<p>But remember, while cost-saving is essential, it's just one piece of the puzzle. The ultimate goal is to deliver exceptional experiences to your audience, and CloudFront provides the tools to achieve that without compromising on quality.</p>
<p>As you continue your journey in the cloud, always keep an eye out for new features and pricing options. AWS is continually evolving, and there might be new opportunities around the corner to optimize both performance and costs. And for those intricate details or when in doubt, the <a target="_blank" href="https://aws.amazon.com/documentation/cloudfront/">official AWS documentation</a> is an invaluable resource.</p>
<p>Here's to building in the cloud, crafting exceptional viewer experiences, and making every penny count!</p>
]]></content:encoded></item><item><title><![CDATA[AWS re:Invent uncovered]]></title><description><![CDATA[Ah, AWS re:Invent! It's not just another tech conference; it's THE tech conference. A sprawling, bustling hub of innovation, learning, and networking, re:Invent is where the AWS world converges every year. If this is your maiden voyage into the vast ...]]></description><link>https://cremich.cloud/aws-reinvent-uncovered</link><guid isPermaLink="true">https://cremich.cloud/aws-reinvent-uncovered</guid><category><![CDATA[AWS]]></category><category><![CDATA[reInvent]]></category><category><![CDATA[guide]]></category><category><![CDATA[conference]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Thu, 21 Sep 2023 12:14:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695298468440/ac866fcc-8b50-4cc1-9421-537396b08e35.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ah, AWS re:Invent! It's not just another tech conference; it's <strong>THE</strong> tech conference. A sprawling, bustling hub of innovation, learning, and networking, re:Invent is where the AWS world converges every year. If this is your maiden voyage into the vast sea of re:Invent, you might be feeling a mix of excitement and, let's admit it, a tad bit of overwhelm. But fear not! Just as every seasoned traveler once took their first step, every re:Invent pro was once in your shoes (hopefully, comfortable ones, but we'll get to that).</p>
<p>The sheer scale and depth of re:Invent can be daunting. With countless sessions, expos, and networking events spread across multiple venues, it's easy to feel like a kid in a candy store, eyes wide, not knowing where to start. But here's the good news: with a bit of preparation and some insider tips, you can navigate this colossal event like a pro, soaking in the knowledge, making meaningful connections, and yes, snagging some cool swag along the way.</p>
<p>So let's uncover the secrets of AWS re:Invent together and set you on a path to make the most of this incredible experience.</p>
<h2 id="heading-planning-is-everything">🗓️ Planning is everything</h2>
<p>The early bird doesn't just get the worm; it gets the best sessions, the optimal seats, and a smoother overall experience. Think of re:Invent as a vast amusement park. Without a map and a plan, you might end up wandering aimlessly, missing out on the best rides.</p>
<ol>
<li><p><strong>Start Early with the Session Catalog:</strong> Weeks before the event kicks off, AWS releases a detailed session catalog. This is your treasure map. Dive into it, explore the myriad sessions, and mark your favorites. Whether you're into deep technical dives, visionary keynotes, or hands-on workshops, there's something for everyone. But remember, the most sought-after sessions fill up fast. So, once the session reservation window opens, be swift to secure your spot.</p>
</li>
<li><p><strong>The Walk-In Strategy:</strong> Missed out on reserving a seat for that session you were eyeing? Don't fret. Every session has walk-in lines. But here's the catch: they can get long, especially for popular sessions. If you're determined not to miss out, ensure you allocate enough time to queue up. Arriving early can make the difference between being part of the action inside or hearing about it later.</p>
</li>
</ol>
<h2 id="heading-the-importance-of-comfort"><strong>👟 The Importance of Comfort</strong></h2>
<p>Imagine this: It's day three of re:Invent. You've attended back-to-back sessions, explored the expo, and networked like a champ. But there's a nagging pain in your feet, and every step feels like a marathon. Don't let this be you!</p>
<ol>
<li><strong>Choose Your Footwear Wisely:</strong> AWS re:Invent is colossal. And when I say colossal, I mean it. The venues are expansive, and the events are spread out. You'll be walking—a lot. While those stylish shoes might look fantastic, they might not be your best friend by the end of the day. Opt for comfort over style. Trust me, your feet will thank you.</li>
</ol>
<h2 id="heading-hop-onhop-off-is-good-for-sightseeing-not-for-reinvent">🏨 Hop-on/Hop-off is good for sightseeing, not for re:Invent</h2>
<p>Las Vegas is a city of grandeur, and the venues for re:Invent are no exception. They're vast, they're opulent, and they're... well, quite far from each other. Navigating between them can be a bit of a trek, especially if you're hopping from one hotel to another for different sessions.</p>
<ol>
<li><p><strong>Avoid the Hotel Hop:</strong> While it might be tempting to jump between hotels to catch various sessions, it's a time-consuming endeavor. The distances between hotels can be deceptive, and even with the shuttle services, you might find yourself spending more time in transit than you'd like. Check the session catalog: many popular sessions are repeated at different venues throughout the week. If you can, try to cluster your sessions by location each day. It'll save you time and energy.</p>
</li>
<li><p><strong>Shuttle Services Are Your Friend:</strong> If you do need to switch venues, make the most of the shuttle services provided. They're efficient, regular, and a great way to move between locations without the hassle of navigating Vegas traffic.</p>
</li>
</ol>
<h2 id="heading-keep-your-schedule-flexible"><strong>⏰ Keep Your Schedule Flexible</strong></h2>
<p>While it's essential to plan, it's equally crucial to leave some room for spontaneity. AWS re:Invent isn't just about the sessions; it's about the community, the unexpected conversations, and those serendipitous moments that can spark new ideas or friendships. Seasoned attendees call this “the magic of the hallway track”.</p>
<ol>
<li><p><strong>Embrace Community and Networking Events:</strong> Beyond the official sessions, re:Invent is teeming with community-led events, meetups, and networking opportunities. These are goldmines for making connections, sharing experiences, and even having a bit of fun. Keep an eye on the re:Invent page and session catalog for these events. Whether it's a casual meetup at a local bar or a more formal networking dinner, these events can be some of the most rewarding parts of your re:Invent experience.</p>
</li>
<li><p><strong>Take a Breath:</strong> It's easy to get caught up in the whirlwind of back-to-back sessions and events. But remember, it's okay to take a step back. Schedule some downtime. Whether it's a leisurely coffee break, a stroll around the venue, or just some quiet time to process what you've learned, these moments can be incredibly refreshing and give you the energy to dive back in with renewed vigor.</p>
</li>
</ol>
<h2 id="heading-decipher-session-types">🎤 <strong>Decipher Session Types</strong></h2>
<p>AWS re:Invent is a smorgasbord of learning opportunities, and not all sessions are created equal. Understanding the different session types can help you tailor your experience to your learning style and objectives.</p>
<ol>
<li><p><strong>Beyond the Breakout:</strong> While breakout sessions are informative, they're often recorded and available for viewing post-event on platforms like YouTube. If you're looking for a more interactive experience, consider other formats.</p>
</li>
<li><p><strong>Chalk Talks &amp; Builder Sessions:</strong> These are smaller, more intimate settings where AWS experts dive deep into specific topics. The beauty of these sessions? They're interactive. You can ask questions, engage in discussions, and get feedback on your specific challenges.</p>
</li>
<li><p><strong>Code Talks - The New Kid on the Block:</strong> Making its debut this year, Code Talks promises to be an exciting addition. Tailored for developers, these sessions are all about diving deep into code, exploring best practices, and getting hands-on with AWS services.</p>
</li>
</ol>
<p>Remember, the goal is to maximize your learning. Choose sessions that align with your interests, offer interactive opportunities, and provide value beyond the event itself.</p>
<h2 id="heading-pick-the-right-session-levels"><strong>📊 Pick The Right Session Levels</strong></h2>
<p>AWS re:Invent caters to a diverse audience, from cloud novices to seasoned experts. The sessions are categorized into different levels to help attendees choose the right fit for their expertise and interests.</p>
<p><strong>Know Your Levels:</strong></p>
<ul>
<li><p><strong>Level 100:</strong> These are introductory sessions, providing a broad overview of a topic. Ideal for those new to AWS or a specific service.</p>
</li>
<li><p><strong>Level 200:</strong> Intermediate sessions that delve a bit deeper, offering a more detailed look at specific AWS services or solutions.</p>
</li>
<li><p><strong>Level 300:</strong> Advanced sessions, perfect for those with a good grasp of AWS. They dive deep into specific topics, often involving complex architectures and solutions.</p>
</li>
<li><p><strong>Level 400:</strong> The deep dive. These are for the pros, covering intricate details, best practices, and advanced architectures.</p>
</li>
</ul>
<p><strong>Tailor Your Experience:</strong> If you're like me, with multiple production workloads already humming on AWS, you might find levels 100 and 200 less enlightening. They're great for building foundational knowledge, but if you're looking for advanced insights, aim for levels 300 and 400. However, if a topic is entirely new to you, don't shy away from starting at a lower level to build a solid understanding.</p>
<p>The key is to strike a balance. Mix and match session levels based on your familiarity with the topics and where you want to deepen your knowledge.</p>
<h2 id="heading-the-art-of-balancing-time"><strong>⚖️ The Art of Balancing Time</strong></h2>
<p>AWS re:Invent is not just a conference; it's an experience. And like any grand experience, it's essential to find a rhythm that allows you to soak in the knowledge, connect with peers, and also take moments for yourself.</p>
<p><strong>Pause and Reflect:</strong> With so much information coming your way, it's easy to feel overwhelmed. Schedule pockets of time to pause, reflect, and process what you've learned. Whether it's jotting down notes, discussing with peers, or simply taking a quiet moment to think, these breaks can enhance your understanding and retention.</p>
<p><strong>Listen to Your Body:</strong> It's a marathon, not a sprint. While the excitement can keep you going, it's essential to listen to your body. If you're feeling drained, it's okay to skip a session or take a longer break. Remember, the goal is to leave re:Invent enriched, not exhausted.</p>
<h2 id="heading-nutrition-and-hydration"><strong>🥤 Nutrition and Hydration</strong></h2>
<p>Amidst the whirlwind of sessions, keynotes, and networking, it's easy to overlook the basics: eating well and staying hydrated. But remember, re:Invent is a marathon, and you'll need to fuel your body and mind to keep going.</p>
<p><strong>Stay Energized:</strong> With the adrenaline rush of the event, you might be tempted to skip meals or grab quick, less-nutritious options. But to stay sharp and attentive, it's crucial to nourish yourself with balanced meals. Thankfully, re:Invent offers a variety of food options. Whether you're grabbing a lunch box from the catering or exploring local eateries, prioritize meals that give you sustained energy.</p>
<p><strong>Hydration is Key:</strong> Las Vegas can be dry, and with all the walking and talking, you'll need to stay hydrated. Don't rely solely on caffeine (tempting, I know). Make use of the numerous water dispensers scattered throughout the venue. Pro tip: During your re:Invent check-in, you'll receive a refillable bottle. Keep it handy and refill it regularly. It's an eco-friendly way to ensure you're always hydrated.</p>
<p><strong>Take Breaks:</strong> Amidst sessions, find spots to relax and enjoy a snack or a drink. One of my favorite places is outside the Caesars Forum. It's a great spot to catch some fresh air, bask in the sun, and recharge before diving back into the action.</p>
<h2 id="heading-explore-the-expo"><strong>🎪 Explore the Expo</strong></h2>
<p>The expo at re:Invent is a microcosm of the cloud computing universe. With a plethora of vendors showcasing cutting-edge products, solutions, and innovations, it's a must-visit for every attendee.</p>
<p><strong>Dive into the AWS Ecosystem:</strong> The expo is not just about flashy booths and swag (though there's plenty of that!). It's a chance to dive deep into the AWS ecosystem, explore new tools, and discover solutions that can elevate your cloud game.</p>
<p><strong>Engage with AWS Experts:</strong> The AWS Village is a gem within the expo. Here, you'll find dedicated booths for almost every AWS service or group of services. It's a golden opportunity to engage with AWS experts, ask questions, clarify doubts, and even provide feedback or feature requests.</p>
<p><strong>Connect with the Community:</strong> Don't miss out on the AWS Heroes Lounge and the AWS Community Lounge. These spaces are buzzing with energy, offering a chance to interact with AWS Heroes, join DevChat sessions, and immerse yourself in the vibrant AWS community.</p>
<h2 id="heading-the-allure-of-reinvent-swag"><strong>🎁 The Allure of re:Invent Swag</strong></h2>
<p>Ah, swag! It's one of those delightful perks of attending tech conferences, and re:Invent takes it to a whole new level. From quirky t-shirts to innovative gadgets, there's a treasure trove of goodies waiting for you.</p>
<p><strong>Be sustainable:</strong> While it's tempting to grab everything in sight, be selective. Think about what you'll genuinely use or cherish. Some attendees even bring an extra bag just for swag, but remember: quality over quantity.</p>
<p><strong>Eco-Friendly Choices:</strong> Many vendors are now offering sustainable swag options. Whether it's a reusable water bottle, eco-friendly tote bags, or bamboo tech accessories, make choices that are kind to our planet.</p>
<p><strong>Share the Love:</strong> If you end up with swag that's not quite your style or duplicates, consider sharing with colleagues, friends, or even donating. It's a great way to spread the re:Invent spirit beyond the event.</p>
<h2 id="heading-final-words">🚀 Final words</h2>
<p>As we draw this guide to a close, I hope these tips offer you a clearer path to navigate the vast landscape of AWS re:Invent. Remember, while the sessions, keynotes, and workshops are invaluable, it's the connections you make, the conversations you have, and the experiences you gather that truly define your re:Invent journey.</p>
<p>If you haven't already, now's the time to take the plunge. Register for AWS re:Invent, mark your calendar, and gear up for an unforgettable cloud adventure. Whether you're there to deepen your technical knowledge, network with like-minded professionals, or simply soak in the vibrant atmosphere, re:Invent promises a transformative experience.</p>
<p><strong>Ready to dive into the world's biggest tech conference?</strong> <a target="_blank" href="https://reinvent.awsevents.com/register/"><strong>Register for AWS re:Invent now</strong></a> and set yourself up for a week of learning, networking, and inspiration.</p>
]]></content:encoded></item><item><title><![CDATA[The generative AI adoption spectrum]]></title><description><![CDATA[Generative AI dominates not only my social media feeds but also daily talks and discussions. Working in the industry of media & entertainment, this is not only here a huge topic. Besides my excitement about this new technology, I recognized an intere...]]></description><link>https://cremich.cloud/the-generative-ai-adoption-spectrum</link><guid isPermaLink="true">https://cremich.cloud/the-generative-ai-adoption-spectrum</guid><category><![CDATA[generative ai]]></category><category><![CDATA[architecture]]></category><category><![CDATA[cloud architect]]></category><category><![CDATA[business strategy]]></category><category><![CDATA[Software Engineering]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Mon, 31 Jul 2023 12:27:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1690832484096/959e9b8d-c0ac-4119-bf18-66f20e7590c1.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Generative AI dominates not only my social media feeds but also daily talks and discussions. Working in the industry of media &amp; entertainment, this is not only here a huge topic. Besides my excitement about this new technology, <strong>I recognized an interesting change in feelings when I think about all things generative AI</strong> - and to be honest, I never felt this before. A good time for self-reflection and writing down my thoughts to share but also for self-therapy.</p>
<p>Let me directly conclude: <strong>It is the first time in my tech career that I have felt a certain fear of a technology.</strong></p>
<p>Writing down my thoughts helped me to look at the matter objectively.</p>
<h2 id="heading-a-natural-fear-of-becoming-irrelevant">A natural fear of becoming irrelevant</h2>
<p>Understanding and reading about how generative AI will change the way we deal with <strong>technology makes me afraid of becoming irrelevant.</strong> It will change the way we build applications and solutions. It will change the way people use software. And that fuels the fear of not being able to keep up, of becoming irrelevant.</p>
<p>A few words <a target="_blank" href="https://www.linkedin.com/in/christian-bonzelet">about me</a>: I am 39 years old and started my tech career early, resulting in almost 20 years of experience in software development and IT. Today I work as an AWS Solutions Architect at DFL Digital Sports in Germany. I have gone through several transformations, from Web 1.0 to Web 2.0, from on-premise to virtualization to cloud. Although these transitions also had a huge impact, I never faced them with uncertainty, fear, or insecurity.</p>
<p>This reminded me of my first instructor during my apprenticeship as an IT professional. He had a hard time adapting to and understanding object-oriented programming. For me, this was bread and butter, as I had learned it from scratch in school. For him - with a mainframe and sequential programming background - it felt like a whole new world that was not as easily accessible as it was for me. <a target="_blank" href="https://reichental.medium.com/fear-may-be-our-friend-in-a-future-of-ai-f73442d03ebe">His natural reaction was fear of becoming irrelevant</a>.</p>
<h2 id="heading-business-value-is-not-a-perpetual-motion">Business value is not a perpetual motion</h2>
<p>In my business context, a lot of people are talking about generative AI and using this easily accessible technology to show cool demos or proofs of concept. People without any kind of technical background. But isn't this good? In some ways, yes. But what scares me are fictitious discussions like this:</p>
<div data-node-type="callout">
<div data-node-type="callout-emoji">👨‍🚀</div>
<div data-node-type="callout-text"><strong>Colleague:</strong> “Have you seen GenAI tool XYZ? Look at this. I built an awesome &lt;insert random feature&gt; here with just a few clicks.”</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">👨‍💻</div>
<div data-node-type="callout-text"><strong>Architect</strong>: “Oh yeah that is cool and impressive. Great work.”</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">👨‍🚀</div>
<div data-node-type="callout-text"><strong>Colleague: </strong>“Oh man…If I can do this, we don’t need developers anymore”.</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">👨‍💻</div>
<div data-node-type="callout-text"><strong>Architect</strong>: “That is interesting. Is this already running in production?”</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">👨‍🚀</div>
<div data-node-type="callout-text"><strong>Colleague: </strong>“No…just running here in my free tier account of GenAI tool XYZ.”</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">👨‍💻</div>
<div data-node-type="callout-text"><strong>Architect</strong>: "What is your next step? And how can I help you?"</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">👨‍🚀</div>
<div data-node-type="callout-text"><strong>Colleague: </strong>"I don't think I need your help. With ChatGPT, I can do these things you do in seconds. I think the next step will be to switch to an enterprise tier of GenAI tool XYZ so that all others can use it."</div>
</div>

<div data-node-type="callout">
<div data-node-type="callout-emoji">👨‍💻</div>
<div data-node-type="callout-text"><strong>Architect</strong>: "Okay cool, then good luck with that."</div>
</div>

<p>Does our astronaut know what he is talking about? A valid question for the architect to ask would also be: "What is holding you back from proceeding?" I bet there are some things to mention. The most obvious: the astronaut needs help to integrate this new capability into the existing value chain. A typical task in which engineers and architects are involved. Business value is not created by just breathing air or by just introducing a new tool. Creating business value requires the interconnection of people, processes, AND technology.</p>
<p>The not-so-obvious part is everything around data privacy and value propositions. If a business decides to use an off-the-shelf GenAI tool: What happens with my business data? What data do I have to share? How sensitive is this data, and how much is it worth protecting? And what makes my tenant better than the tenant of our competitors?<br />How can architects help? Exactly with this!</p>
<h2 id="heading-the-spectrum-of-adopting-generative-ai">The spectrum of adopting generative AI</h2>
<p>Adopting generative AI spans a broad spectrum: from just using ChatGPT somewhere in your company, to integrating GenAI tools via API into your existing workflows, up to building your own capabilities by fine-tuning foundation models. Depending on the adoption level, the involvement of professional engineers and roles is recommended or even mandatory.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1690828984395/80584c50-bd07-4578-b302-8b2c61299af2.png" alt class="image--center mx-auto" /></p>
<p>Professional software engineering is so much more than just coding! And it is just one part of the value chain of your business.</p>
<p>We discussed this topic in the <a target="_blank" href="https://aws.amazon.com/developer/community/community-builders/">AWS Community Builders program</a>, with great feedback from several builders and subject matter experts. That was helpful to get a broad range of perspectives, and it highlighted the importance of thinking about the responsible use of AI.</p>
<h2 id="heading-the-value-of-architects">The value of architects</h2>
<p>Architects are needed in discussions like this to help people and businesses in creating real, measurable, and sustainable business value in the end. <a target="_blank" href="https://architectelevator.com/architecture/architecture-options/"><strong>Architects sell options</strong></a>. And this becomes especially relevant as new technology arises. Hence our contributions might be more important in the future than ever before.</p>
<p>And this gives me the feeling of still being needed in the future.</p>
<p>The more I think about this and reflect on my words, the more I conclude that our domain and business of IT has been dominated by a sense of being untouchable. We only heard about „IT and software eating the world/replacing jobs“ because we created this software. Now something like generative AI has the potential to destroy this worldview of „being untouchable“. Software eats software, in a way. Fear seems to be a natural reaction. But we architects have the power and the skills <a target="_blank" href="https://architectelevator.com/architecture/multiple-dimensions/">to see things from multiple perspectives</a>.</p>
<p>But what if generative AI or LLMs have the power to replace certain tasks of architects or engineers? Coming back to the story of my first instructor during my apprenticeship. What would I recommend to him with the experience of today? Not much but:</p>
<blockquote>
<p>There is a chance of shifting perspectives!</p>
</blockquote>
<p>And maybe this is exactly the situation we are facing right now. Maybe it is time to shift perspectives. Engineers and architects that might fear being replaced can change the spectrum of responsibilities. For example by leaving the engine room to some degree and entering the level of understanding better the business domain they are in. Extending their skillset beyond just coding to build even better solutions in the future.</p>
<p>Shifting perspectives can also mean that we all have to learn what this new technology means for us. Simple questions like this one from AWS Serverless Hero Yan Cui indicate that we are all at the very early stages of this emerging tech.</p>
<div class="embed-wrapper"><a class="embed-card" href="https://www.linkedin.com/posts/theburningmonk_if-the-future-of-computing-is-prompting-ai-activity-7086600416947265537-zDzO">https://www.linkedin.com/posts/theburningmonk_if-the-future-of-computing-is-prompting-ai-activity-7086600416947265537-zDzO</a></div>
<p>My take on this: I would consider them to be code. I define code as something that provides business value. Prompts (and their quality) do the same, so I would put them in the same category.</p>
<p>That means we could also talk about how to test prompts. What does CI/CD or TDD look like for prompts? Will some tools do the job for me? What does this mean for observability? And dear astronaut (see the chat above), let's meet for a coffee and talk about vendor lock-ins.</p>
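<p>To make the question of "TDD for prompts" a bit more concrete, here is a minimal sketch of what a prompt regression test could look like. Everything here is hypothetical: the <code>generate</code> function stands in for a real LLM API call, and the assertions check structural properties rather than exact wording, since model output varies between runs.</p>

```python
# A minimal sketch of "TDD for prompts". All names here are hypothetical.

PROMPT_TEMPLATE = "Write a short welcome email to {name}. Start the first line with 'Subject:'."

def generate(prompt: str) -> str:
    """Stand-in for a real model call.

    In a real pipeline this would call an LLM API; it is stubbed here
    so the test runs deterministically.
    """
    return "Subject: Welcome aboard!\nHi Alex, thanks for signing up."

def test_welcome_email_prompt():
    response = generate(PROMPT_TEMPLATE.format(name="Alex"))
    # Assert structural properties, not exact wording -- LLM output varies.
    assert response.startswith("Subject:")
    assert "Alex" in response
    assert len(response.split()) < 200

test_welcome_email_prompt()
print("prompt regression test passed")
```

<p>The interesting design question is what to assert: format, length, required facts, or forbidden content are testable; tone and quality are much harder and may need model-based evaluation.</p>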
<h2 id="heading-conclusion">Conclusion</h2>
<p>The adoption of generative AI requires both skilled architects and engineers. Businesses need roles and people with skills that can connect the dots and build bridges between business and IT. Maybe this is more important than ever.</p>
<p>Certain tasks of engineers might be replaced by AI. What can't be replaced is the knowledge about the business and how to translate a business strategy into running software. Independent if we write the code on our own or writing prompts in the future.</p>
<p>The output of LLMs is not wisdom. It is just a statistically calculated sequence of words given a concrete context. LLMs know how language works. But they don't know how to run your current business or how to create your next business strategy. And they do not know about your individual technological, organizational, or environmental constraints.</p>
<p>But your colleagues do. Hence replacing this intellectual property is a risk that you should manage actively.</p>
]]></content:encoded></item><item><title><![CDATA[From Strategy to Execution]]></title><description><![CDATA[Cloud architects play a pivotal role in designing and building scalable and innovative solutions to solve complex business problems. As an AWS Solutions Architect, I understand the challenges that cloud architects face in aligning development teams w...]]></description><link>https://cremich.cloud/from-strategy-to-execution</link><guid isPermaLink="true">https://cremich.cloud/from-strategy-to-execution</guid><category><![CDATA[cloud architecture]]></category><category><![CDATA[business strategy]]></category><category><![CDATA[agile methodology]]></category><category><![CDATA[impact mapping]]></category><category><![CDATA[roadmap planning]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Thu, 27 Jul 2023 07:42:59 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1687335567621/c29342b9-80c0-4051-99b3-71dc98f10990.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Cloud architects play a pivotal role in designing and building scalable and innovative solutions to solve complex business problems. As an AWS Solutions Architect, I understand the challenges that cloud architects face in aligning development teams with overarching business objectives. That's why I'm excited about a powerful visual collaboration method called <a target="_blank" href="https://www.impactmapping.org/">I<strong>mpact Mapping</strong></a> that can help to bridge the gap between strategy and execution.</p>
<p><a target="_blank" href="https://www.impactmapping.org/"><strong>Impact Mapping</strong></a> is a technique that allows cloud architects and development teams to effectively align their activities with the overall business goals. It provides a consistent path from answering "Why are we doing this?" (business goal) to "How can we contribute?" (deliverables). Following this path ensures that development efforts are directly linked to the desired business outcomes.</p>
<p>In this blog post, we will uncover how <a target="_blank" href="https://www.impactmapping.org/">Impact Mapping</a> empowers cloud architects in their mission to align development teams with business objectives. We'll explore its significance, discuss a practical scenario, and outline the process of creating an impact map. Additionally, we'll discover how <a target="_blank" href="https://www.impactmapping.org/">Impact Mapping</a> enables cloud architects to translate business outcomes into measurable KPIs, facilitating a feedback mechanism for development teams to make informed technical decisions.</p>
<p>Are you ready to learn how <a target="_blank" href="https://www.impactmapping.org/">Impact Mapping</a> can help you overcome these challenges and take your cloud architecture to new heights? Then let's get started!</p>
<h2 id="heading-the-importance-of-aligning-development-teams-with-business-goals">The importance of aligning development teams with business goals.</h2>
<p>We as architects need to connect the dots between IT strategy and business strategy. Aligning development teams with business goals can be a challenge. An impact map acts as a guiding light, enabling us to identify the most impactful deliverables that will bring about the desired changes in customer behavior.</p>
<p>By aligning development teams with business goals, several benefits can be realized. Firstly, it ensures that the efforts of the development teams are directed toward high-priority features that have a significant impact on a common business goal. This focus helps in maximizing the return on investment, optimizing resource allocation, and making a real impact.</p>
<p>Secondly, it fosters a sense of purpose and shared understanding among the team members. When people have a clear understanding of how their work directly contributes to the success of the business, it enhances their motivation and drives them to deliver high-quality solutions.</p>
<h2 id="heading-understanding-impact-mapping">Understanding Impact Mapping</h2>
<p>At its core, an impact map is typically presented as a mind map or a similar visual hierarchy. It provides a visualized and structured approach to answering fundamental questions such as:</p>
<ul>
<li><p><strong>Why are we building this?</strong> This is the overall goal we are trying to achieve.</p>
</li>
<li><p><strong>Who can contribute to reaching our goal?</strong> These are the actors that influence our goal achievement.</p>
</li>
<li><p><strong>How should our customer behavior change?</strong> This level connects actors with our goal and defines the impacts we are trying to create.</p>
</li>
<li><p><strong>How can we contribute to support an impact?</strong> Deliverables can be software features or organizational activities that focus on making an impact.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687269845165/92cf3c32-c2e6-409d-a79e-d7de2f47c3b1.png" alt="understanding impact mapping" class="image--center mx-auto" /></p>
<p>By creating an impact map, you establish a clear line of sight between business goals, desired impacts, and deliverables. The beauty of impact mapping lies in its simplicity and ease of use.</p>
<p>It not only provides a visual representation of the alignment between technical decisions and business outcomes, but it also serves as a communication tool that facilitates collaboration and shared understanding among all stakeholders. This shared understanding is crucial for making informed decisions, prioritizing features, and optimizing resource allocation.</p>
<h2 id="heading-creating-an-impact-map-for-bingewatch">Creating an impact map for Bingewatch</h2>
<p>To illustrate the practical application of impact mapping, let's dive into a scenario involving a fictitious company called Bingewatch - a leading VOD platform. Bingewatch faces the challenge of maintaining customer engagement and preventing churn as customers increasingly gravitate toward competitors. As the financial situation is very tense, the CEO of Bingewatch has decided to address this issue by calling out a business goal to increase revenue.</p>
<p>As a cloud architect supporting Bingewatch's development teams, your role is crucial in aligning their efforts with the overarching business goal. Every team has a lot of great ideas and a huge number of stories in their backlog. But how should our teams start? You start by creating an impact map to guide the selection of the right features, the ones that will have a tangible impact on achieving this goal.</p>
<h3 id="heading-level-1-identify-the-business-goal"><strong>Level 1: Identify the Business Goal</strong></h3>
<p>This part is easy as the business goal was clearly articulated by the CEO. This will be the first level and starting point of our map.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687272188670/3d8aa88b-6a37-4c47-b38f-4a64ab896b1c.png" alt="identify the business goal" class="image--center mx-auto" /></p>
<p>If you are in a situation where the business goal is not as clear as in this case, make sure to define a goal that focuses on the problem that needs to be solved instead of describing the solution. Focus on the outcome, not on the output!</p>
<h3 id="heading-level-2-identify-the-actors"><strong>Level 2: Identify the Actors</strong></h3>
<p>The team identified three relevant actors that impact the desired outcome of Bingewatch.</p>
<ol>
<li><p><strong>Subscriber:</strong> Paying customers that use Bingewatch to watch movies, series, and TV shows.</p>
</li>
<li><p><strong>Marketing Team:</strong> The team responsible for promoting Bingewatch and its content and special offerings.</p>
</li>
<li><p><strong>Content Delivery:</strong> Systems responsible for delivering high-quality video streams to subscribers all over the world.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687272171719/38e490e7-1935-45dc-984e-85c964c0e444.png" alt="identify the actors" class="image--center mx-auto" /></p>
<p>Subscribers are our primary actors, whose needs the platform fulfills. The Marketing Team and the Content-Delivery Team are secondary actors that provide services facilitating the fulfillment of our primary actors' needs.</p>
<h3 id="heading-level-3-define-impacts"><strong>Level 3: Define Impacts</strong></h3>
<p>This level defines for each actor how their behavior should change to support our goal. Subscribers should be encouraged to visit the platform more often and watch more movies during their customer lifetime.</p>
<p>Our Marketing Team should promote new videos to create awareness of and interest in particular movies. These can be new releases, special offers, or videos that are top-ranked in certain markets and territories.</p>
<p>Also, more technical teams like our Content-Delivery Team can contribute to our business goal. They should ensure that subscribers get a best-in-class movie experience by minimizing streaming errors and disruptions - one of the top 3 reasons for customers to cancel their subscriptions.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687272162678/6e99e715-f8d6-47a3-901a-25b96fca59d4.png" alt="define impacts" class="image--center mx-auto" /></p>
<h3 id="heading-level-4-map-deliverables"><strong>Level 4: Map Deliverables</strong></h3>
<p>Level 4 leaves a lot of room for creativity - but in a controlled and safe space, ensuring that only software features or activities are considered that drive the change in behavior of an actor and therefore support our business goal.</p>
<p>By sending out reminder notifications, inactive subscribers should be reactivated to continue watching unfinished movies or series. Active subscribers should be able to set a reminder for new season launches of their favorite series. A "don't miss" feature should notify subscribers once new movies featuring their favorite actors or genres are added to the catalog.</p>
<p>Providing recommended movies to watch on the home screen should help to influence the impact of more movie watches per subscriber. The team assumes that related content in the content discovery screen (Because you watched movie ABC) or similar movie recommendations on the detail page (More like ABC) will increase the number of watched movies per subscriber over time.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687272150749/f75e04e1-757f-4d2b-ba51-afbd1ae86c9d.png" alt="map deliverables" class="image--center mx-auto" /></p>
<p>This is just an excerpt of deliverables, and the final list might be longer than the one you see here. During the discovery phase, there will also be ideas that are great but not related to the behavior change of the actors and the business goal. By mapping the deliverables to the impacts, the development teams at Bingewatch can prioritize their efforts and focus on building features that will have a direct impact on increasing subscriber engagement.</p>
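<p>The finished map can also be captured as a simple nested data structure, which makes the line of sight from goal to deliverable explicit. The following sketch uses a small excerpt of the Bingewatch map as a plain Python dictionary; the field names are my own, not part of the impact mapping method.</p>

```python
# A small excerpt of the Bingewatch impact map as a nested structure.
# Field names are illustrative, not part of the impact mapping method.
impact_map = {
    "goal": "Increase revenue",
    "actors": [
        {
            "name": "Subscriber",
            "impacts": [
                {
                    "change": "Watch more movies",
                    "deliverables": [
                        "Recommended movies on the home screen",
                        "Reminder notifications for unfinished movies",
                    ],
                },
            ],
        },
        {
            "name": "Marketing Team",
            "impacts": [
                {
                    "change": "Promote new videos",
                    "deliverables": ["Top-ranked video promotions"],
                },
            ],
        },
    ],
}

def lines_of_sight(impact_map):
    """Yield (goal, actor, impact, deliverable) for every deliverable in the map."""
    for actor in impact_map["actors"]:
        for impact in actor["impacts"]:
            for deliverable in impact["deliverables"]:
                yield (impact_map["goal"], actor["name"], impact["change"], deliverable)

for row in lines_of_sight(impact_map):
    print(" -> ".join(row))
```

<p>Whether you keep the map on a whiteboard or in a structure like this, the point is the same: every deliverable must trace back through an impact and an actor to the goal, or it does not belong on the map.</p>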
<h2 id="heading-making-roadmap-decisions-and-aligning-backlogs"><strong>Making Roadmap Decisions and Aligning Backlogs</strong></h2>
<p>Once the impact map is created, it's essential to translate the identified business outcomes into measurable Key Performance Indicators (KPIs). KPIs provide a quantifiable way to assess the success of the impact map and track progress toward achieving the desired business goals.</p>
<p>In our Bingewatch example, the business goal was clearly articulated. But how do we know that we achieved our goal? We can define KPIs and metrics such as:</p>
<ol>
<li><p><strong>The number of returning subscribers:</strong> This metric indicates the success in attracting subscribers to revisit the platform, demonstrating sustained engagement.</p>
</li>
<li><p><strong>Average movies watched per subscriber per month:</strong> By tracking the number of movies each subscriber consumes within a given time frame, we can gauge the level of engagement and whether subscribers are actively using the platform.</p>
</li>
<li><p><strong>Customer lifetime value:</strong> The longer customers stay as active and engaged subscribers the more revenue will be created. A key KPI for Bingewatch.</p>
</li>
<li><p><strong>Reduced stream errors:</strong> This metric reflects the quality of the streaming experience. By aiming to minimize stream errors, we enhance user satisfaction and encourage continuous usage.</p>
</li>
<li><p><strong>Subscriber churn:</strong> The fewer people leave (for example due to reduced streaming errors) the higher the customer lifetime value.</p>
</li>
</ol>
<p>By setting these KPIs and metrics, cloud architects and development teams have clear performance indicators that will help evaluate the impact of their initiatives. Regularly monitoring these metrics provides valuable insights into the effectiveness of the solutions being developed and allows for adjustments to be made to the roadmap if necessary.</p>
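<p>KPIs like these only provide feedback if they are actually computed and tracked. As a hedged illustration (the event format and the sample numbers are made up for this sketch), here is one way the "average movies watched per subscriber per month" metric could be derived from a stream of watch events:</p>

```python
from collections import defaultdict

# Hypothetical watch events: (subscriber_id, month, movies_watched).
watch_events = [
    ("alice", "2023-06", 4),
    ("alice", "2023-07", 6),
    ("bob", "2023-07", 2),
]

def avg_movies_per_subscriber(events, month):
    """Average movies watched per active subscriber in the given month."""
    per_subscriber = defaultdict(int)
    for subscriber, event_month, count in events:
        if event_month == month:
            per_subscriber[subscriber] += count
    if not per_subscriber:
        return 0.0
    return sum(per_subscriber.values()) / len(per_subscriber)

print(avg_movies_per_subscriber(watch_events, "2023-07"))  # prints 4.0
```

<p>In a real setup this calculation would live in your analytics pipeline or dashboard, but the definition should be agreed on with the business up front so everyone measures the impact the same way.</p>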
<p>Impact mapping serves as an excellent starting point for creating agile roadmaps and backlogs that align with business goals. It provides a framework for prioritizing features and functionalities based on their impact on the desired business outcomes.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1687272138519/2bb842a7-104e-44ad-a41e-cd46ef0c46e9.png" alt="from deliverables to team backlogs" class="image--center mx-auto" /></p>
<p>When translating the impact map into a backlog, the focus shifts to defining user stories and tasks that support the delivery of the mapped deliverables.</p>
<h2 id="heading-conclusion-build-with-impact">Conclusion: Build with impact</h2>
<p>In conclusion, impact mapping empowers cloud architects to connect the dots, building solutions that make a real impact on the business. The beauty of impact mapping lies in its simplicity and effectiveness. It provides a visual collaboration method that bridges the gap between strategy and execution, enabling cloud architects to translate business outcomes into deliverables and establish a feedback loop through measurable KPIs.</p>
<p>Integrating impact mapping into the backlog design and discovery process can enhance the alignment between development teams and business objectives. It enables you to create roadmaps and backlogs that are focused on building solutions that make a tangible impact toward the common business goal.</p>
<p>So, why not give impact mapping a try? Start by identifying the business goals, actors, impacts, and deliverables relevant to your specific context. Collaborate with stakeholders to create an impact map that aligns development teams with the overall business objectives. Use it as a guide to prioritize features, make roadmap decisions, and create backlogs that focus on building solutions with a measurable impact.</p>
<p>Remember, impact mapping is a flexible and iterative process. As you gather feedback and measure the success of your solutions, continue to refine and adjust your impact map accordingly. This ensures that you stay aligned with evolving business goals and adapt to changing market demands.</p>
<p>By incorporating impact mapping into your cloud architecture practice, you can enhance your ability to drive business outcomes, foster collaboration among teams, and deliver solutions that truly make an impact on your organization's success.</p>
]]></content:encoded></item><item><title><![CDATA[Handling Retries in Messaging Systems]]></title><description><![CDATA[When messages are flowing through your system, it's crucial to handle retries effectively to maintain robustness and guarantee message delivery. With the right patterns and tools, you can build resilient messaging architectures.
In this article, we'l...]]></description><link>https://cremich.cloud/handling-retries-in-messaging-systems</link><guid isPermaLink="true">https://cremich.cloud/handling-retries-in-messaging-systems</guid><category><![CDATA[cloud architecture]]></category><category><![CDATA[Reliability]]></category><category><![CDATA[AWS]]></category><category><![CDATA[enterprise integration patterns]]></category><category><![CDATA[messaging patterns]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Sun, 02 Jul 2023 07:00:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1686661158601/98149189-d8e9-4d11-9e77-57bcb75e642a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When messages are flowing through your system, it's crucial to handle retries effectively to maintain robustness and guarantee message delivery. With the right patterns and tools, you can build resilient messaging architectures.</p>
<p>In this article, we'll explore the world of handling retries in messaging systems by leveraging <a target="_blank" href="https://www.enterpriseintegrationpatterns.com/"><strong>Enterprise Integration Patterns</strong></a>. These patterns provide a solid foundation for building reliable systems. With the help of AWS services, such as <a target="_blank" href="https://docs.aws.amazon.com/sns/latest/dg/welcome.html"><strong>Amazon Simple Notification Service (SNS)</strong></a>, <a target="_blank" href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-what-is.html"><strong>Amazon EventBridge</strong></a>, and <a target="_blank" href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/welcome.html"><strong>Amazon Simple Queueing Service (SQS)</strong></a>, we can implement patterns like <a target="_blank" href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/PublishSubscribeChannel.html"><strong>publish-subscribe channels</strong></a> or <a target="_blank" href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/MessageBroker.html"><strong>message broker</strong></a> and integrate them with a <a target="_blank" href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/DeadLetterChannel.html"><strong>dead-letter-channel</strong></a><strong>.</strong></p>
<h2 id="heading-understanding-publish-subscribe-channels-message-broker-and-dead-letter-channels">Understanding Publish-Subscribe Channels, Message Broker, and Dead-Letter Channels</h2>
<p>These patterns play a significant role in facilitating communication within messaging and event-driven architectures. It's important to note that relying solely on publish-subscribe channels or a message broker does not guarantee a reliable flow of messages. There are situations where message deliveries can fail, such as:</p>
<ol>
<li><p><strong>Unavailable or Unresponsive Subscribers</strong>: Messages may fail to reach their intended subscribers if those subscribers are temporarily unavailable or unresponsive, for example due to network issues, system failures, or resource constraints.</p>
</li>
<li><p><strong>Poison pill messages:</strong> Subscribers can perform validation checks on incoming messages to ensure they adhere to the expected format or criteria. If a message fails validation, the subscriber may reject it by throwing an exception, leading to a failed delivery.</p>
</li>
<li><p><strong>Misconfigured routing:</strong> If routing rules within the messaging system are not properly configured, messages may not be routed correctly to the intended targets.</p>
</li>
</ol>
<p>Dead-letter channels play an important role in improving the overall reliability of your system for certain failure types.</p>
<h3 id="heading-publish-subscribe-channels">Publish-Subscribe Channels</h3>
<p>Publish-subscribe channels broadcast messages to multiple subscribers. Publishers generate messages and publish them to the channel without knowing the identity or number of subscribers, allowing for loose coupling between components. Subscribers express interest in receiving messages by providing a dedicated output channel. In terms of message flow, a publish-subscribe channel can be considered a one-way street.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686668334346/5f986a04-1ff2-4d36-a5e7-508aeefff65e.png" alt="Publish-Subscribe channel visualization" class="image--center mx-auto" /></p>
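<p>The core idea can be sketched in a few lines. This is an illustrative Python sketch (class and method names are my own, not from any messaging library): publishers hand a message to the channel, and every registered subscriber receives its own copy through its output channel.</p>

```python
class PublishSubscribeChannel:
    """Minimal sketch of a publish-subscribe channel.

    Publishers do not know the identity or number of subscribers;
    each subscriber registers a dedicated output channel (modeled
    here as a callback) and receives a copy of every message.
    Message flow is strictly one-way: publisher -> subscribers.
    """

    def __init__(self):
        self._subscribers = []

    def subscribe(self, output_channel):
        # A subscriber provides its output channel to express interest.
        self._subscribers.append(output_channel)

    def publish(self, message):
        # Broadcast: every subscriber gets the message.
        for deliver in self._subscribers:
            deliver(message)


# Usage: two independent subscribers, loosely coupled to the publisher
channel = PublishSubscribeChannel()
inbox_a, inbox_b = [], []
channel.subscribe(inbox_a.append)
channel.subscribe(inbox_b.append)
channel.publish({"event": "order-created"})
```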
<h3 id="heading-message-broker">Message Broker</h3>
<p>A message broker serves as a central hub for messages in messaging systems. Often bundled with message router capabilities, several adapters, message filtering, message transformation capabilities, and a whole lot more messaging infrastructure, a message broker facilitates seamless communication between components. Messages can flow in any direction, back and forth, depending on the configured message routings.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686668350689/50ded2c6-b75b-4254-a1e9-d0ab5c36c62e.png" alt="Message broker visualization" class="image--center mx-auto" /></p>
<h3 id="heading-dead-letter-channels">Dead-Letter Channels</h3>
<p>To handle situations where message delivery to the desired destination fails, messaging systems incorporate dead-letter channels. Dead-letter channels act as a message target when a previous channel, such as a publish-subscribe channel or a message broker, detects that a message could not be delivered to its intended destination. In such cases, the source channel can redirect the message to a dead-letter channel.</p>
<p>Dead-letter channels provide an opportunity for further analysis and retries, enabling you to resolve issues that may have caused delivery failures. This enhances the overall reliability of the system.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686668368654/dc7772c7-9905-485d-9017-51dffc232200.png" alt="Dead-Letter-Channel visualization" class="image--center mx-auto" /></p>
<h2 id="heading-implementing-dead-letter-channels-in-aws">Implementing Dead-Letter-Channels in AWS</h2>
<p>Now that we understand the importance of dead-letter channels in improving the reliability of messaging systems, let's explore how we can implement this pattern in AWS using two popular services: Amazon Simple Notification Service (SNS) and Amazon EventBridge. Both services provide integration with Amazon Simple Queue Service (SQS), which serves as the dead-letter channel for capturing and handling failed events.</p>
<h3 id="heading-amazon-sns-as-your-publish-subscribe-channel">Amazon SNS as your Publish-Subscribe Channel</h3>
<p>Amazon SNS acts as a publish-subscribe channel, allowing publishers to send messages to topics, while subscribers receive messages from these topics. When configuring Amazon SNS, you can set up multiple subscriptions within a topic to direct messages to different subscribers.</p>
<p>Amazon SNS comes with an implicit retry mechanism called <a target="_blank" href="https://docs.aws.amazon.com/sns/latest/dg/sns-message-delivery-retries.html">delivery policies</a>. A delivery policy defines how Amazon SNS retries the delivery of messages when the subscriber is not able to process a message.</p>
<p>To incorporate a dead-letter channel for a specific subscription in the workflow, you can configure the subscription to integrate with an Amazon SQS dead-letter queue. This means that if a message published to a topic fails to be delivered to a particular subscribed endpoint, it will be sent to the associated dead-letter queue for that specific subscription.</p>
<p>By configuring dead-letter queues on individual subscriptions, you ensure that messages whose delivery policy retries are exhausted are captured and processed accordingly, mitigating the risk of message loss and enhancing the overall reliability of your messaging system.</p>
<p>The AWS Documentation gives good guidance on how to <a target="_blank" href="https://docs.aws.amazon.com/sns/latest/dg/sns-dead-letter-queues.html">set up Amazon SNS dead-letter queues</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686668380097/f5ba113b-a1e9-43c9-962f-7553f9eef88c.png" alt="Amazon SNS with dead-letter queue" class="image--center mx-auto" /></p>
<h3 id="heading-amazon-eventbridge-as-your-message-broker">Amazon EventBridge as your message broker</h3>
<p>Amazon EventBridge acts as an implementation of a message broker. When setting up Amazon EventBridge, you can define rules that determine how messages are processed and delivered.</p>
<p>Amazon EventBridge comes with an implicit retry mechanism called <a target="_blank" href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-rule-dlq.html">event retry policies</a>. A retry policy defines how Amazon EventBridge retries the delivery of messages when a target is not able to process a message.</p>
<p>To implement a dead-letter channel, you can configure a rule's targets to integrate with Amazon SQS dead-letter queues. This allows you to specify a dead-letter queue at the target level, meaning that messages that still cannot be delivered to a specific target once the retry policy is exhausted will be sent to the associated dead-letter queue.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686668442186/b7ed0c2c-2e47-482e-9f5c-9b2a2a80ee94.png" alt="Amazon Eventbridge with dead-letter-queue" class="image--center mx-auto" /></p>
<p>It is a bit hidden in the AWS documentation but here's how you can do it in a nutshell:</p>
<ol>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-dead-letter-queue.html">Set Up an Amazon SQS Dead-Letter Queue</a>: Create an Amazon SQS dead-letter queue that captures events not successfully delivered to their intended targets. Make sure to add the required resource policies on the queue, granting Amazon EventBridge <a target="_blank" href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-rule-dlq.html#eb-dlq-perms">permission</a> to write messages into the queue.</p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule.html">Create a rule</a>: Begin by creating rules within EventBridge that specify the conditions for event processing. These rules can include filtering criteria, event patterns, or specific sources.</p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-create-rule.html#eb-create-rule-target">Configure Dead-Letter-Queue for a target</a>: When defining the targets for your EventBridge rules, you can configure a dead-letter queue for individual targets within the rule configuration. This allows you to specify a specific dead-letter queue for each target, which will receive events that fail to be delivered to the intended destination. By configuring a dead-letter queue for a target, you ensure that any events that encounter delivery issues for that specific target are captured and redirected to the dead-letter channel for further analysis and handling.</p>
</li>
</ol>
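<p>For step 1, the required resource policy on the dead-letter queue looks roughly like the following sketch. The queue and rule ARNs are placeholders you have to replace with your own resources; scoping the <code>aws:SourceArn</code> condition to the rule prevents other principals from writing into your queue:</p>

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEventBridgeRuleToSendToDlq",
      "Effect": "Allow",
      "Principal": { "Service": "events.amazonaws.com" },
      "Action": "sqs:SendMessage",
      "Resource": "arn:aws:sqs:eu-central-1:123456789012:my-rule-dlq",
      "Condition": {
        "ArnEquals": {
          "aws:SourceArn": "arn:aws:events:eu-central-1:123456789012:rule/my-rule"
        }
      }
    }
  ]
}
```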
<p>This ensures that any messages unable to reach their intended targets are captured, allowing for further analysis or retry attempts to mitigate the risk of message loss.</p>
<h2 id="heading-best-practices-for-handling-retries">Best Practices for Handling Retries</h2>
<p>Just because we have a dead-letter channel doesn't mean our system is capable of handling message retries. The missing part is feeding messages from the dead-letter channel back to their original recipients once those recipients are able to process them again.</p>
<p>Effective retry mechanisms in messaging systems require careful consideration and adherence to best practices. Here are some key considerations and strategies to help you handle retries successfully. A lot of these strategies can be implemented with <a target="_blank" href="https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-rule-dlq.html">Amazon EventBridge</a> and Amazon SNS dead-letter queue features.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1686672139944/c4f91c0b-2428-404c-bceb-6ad553f9d32c.png" alt="Handling retries visualization" class="image--center mx-auto" /></p>
<h3 id="heading-key-considerations-for-implementing-retries">Key Considerations for Implementing Retries</h3>
<p><strong>Identify failure scenarios:</strong> Understand the potential reasons for message delivery failures, such as network issues, temporary service unavailability, or exceeded throughput limits. Identifying these failure scenarios allows you to design appropriate retry mechanisms.</p>
<p><strong>Define maximum retry attempts:</strong> Determine the maximum number of retry attempts for a failed message. Setting a limit prevents infinite retries and ensures that processing moves forward after a reasonable number of attempts.</p>
<h3 id="heading-strategies-for-retry-policies-and-exponential-backoff">Strategies for Retry Policies and Exponential Backoff</h3>
<p><strong>Exponential backoff:</strong> Implement an exponential backoff strategy where the delay between retries increases exponentially with each subsequent attempt. This approach helps avoid overwhelming downstream systems during transient failures and improves the chances of successful retries.</p>
<p><strong>Jitter:</strong> Introduce jitter by adding randomization to the retry timings. This randomization further reduces the likelihood of congestion during retry attempts and helps distribute the load more evenly.</p>
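<p>Both strategies can be combined in a few lines. This is an illustrative Python sketch (the function and parameter names are my own, not from any AWS SDK) of "full jitter" backoff: the ceiling grows exponentially with each attempt, and the actual delay is drawn randomly below it.</p>

```python
import random

def backoff_delay(attempt, base=0.5, cap=30.0):
    """Exponential backoff with full jitter.

    The ceiling doubles with every attempt (exponential backoff),
    capped at `cap` seconds, and the actual delay is drawn uniformly
    below that ceiling (jitter), so a crowd of retrying clients does
    not hit the downstream system in lockstep.
    """
    ceiling = min(cap, base * (2 ** attempt))
    return random.uniform(0, ceiling)


# Ceilings grow 0.5s, 1s, 2s, 4s, ... until capped at 30s
delays = [backoff_delay(attempt) for attempt in range(5)]
```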
<h3 id="heading-monitoring-and-troubleshooting-retries">Monitoring and Troubleshooting Retries</h3>
<p><strong>Monitor dead-letter channels:</strong> Regularly monitor the dead-letter channels to identify patterns or recurring issues that may be causing message delivery failures. Analyze the messages stored in the dead-letter queues to gain insights into the root causes of failures and take appropriate corrective actions.</p>
<p><strong>Logging and error reporting:</strong> Implement comprehensive logging and error reporting mechanisms to capture and record detailed error information during retries. This data is valuable for error analysis, debugging, and identifying areas for improvement.</p>
<h3 id="heading-handle-poison-pill-messages">Handle Poison Pill Messages</h3>
<p><strong>Identify poison pill messages:</strong> Poison pill messages are messages that repeatedly fail during processing. Detect and identify these messages to prevent them from entering infinite retry loops.</p>
<p><strong>Move poison pill messages to a separate queue:</strong> When a message repeatedly fails a predefined number of retries, consider moving it to a separate queue for manual inspection and handling. This prevents the message from continuously being retried, preserving system resources.</p>
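<p>A minimal sketch of this routing decision in Python, assuming we track a failure counter on the message itself (in Amazon SQS, the <code>ApproximateReceiveCount</code> attribute plays a similar role). Names are illustrative, not from any library:</p>

```python
def route_failed_message(message, retry_queue, quarantine_queue, max_retries=5):
    """Decide what to do with a message that just failed processing.

    Messages that fail more often than `max_retries` are treated as
    poison pills and moved to a separate quarantine queue for manual
    inspection, instead of being retried forever and wasting resources.
    """
    message["failures"] = message.get("failures", 0) + 1
    if message["failures"] > max_retries:
        quarantine_queue.append(message)  # poison pill: stop retrying
    else:
        retry_queue.append(message)       # still worth another attempt
```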
<h2 id="heading-conclusion">Conclusion</h2>
<p>We have explored the importance of handling retries in messaging systems and discussed various strategies and best practices to enhance the reliability of message processing. We started by understanding the role of publish-subscribe channels and message brokers in messaging systems. We then delved into the significance of dead-letter channels as a mechanism to handle failed message deliveries.</p>
<p>We explored how dead-letter channels can be implemented with Amazon SQS integrated into popular AWS services such as Amazon SNS and Amazon EventBridge. By leveraging the power of Amazon SQS dead-letter queues, we can capture and analyze failed messages, allowing for retries and resolution of delivery issues.</p>
<p>Furthermore, we discussed key considerations for implementing retries, including identifying failure scenarios, defining maximum retry attempts, and establishing appropriate timeout thresholds. We highlighted the importance of exponential backoff and introduced strategies to prevent infinite loops when dealing with poison pill messages.</p>
<p>Handling retries in messaging systems requires a thoughtful approach and an understanding of the potential failure points within your architecture. By following the best practices outlined in this article, you can improve the reliability of your event-driven systems, reduce message processing failures, and provide a robust and resilient experience for your users.</p>
]]></content:encoded></item><item><title><![CDATA[Real-Time client updates without the overhead]]></title><description><![CDATA[Real-time updates are essential for many web applications, from chat rooms to real-time data visualization. A traditional technique for achieving real-time updates is called "polling". In this article, I will give a brief overview of two essential cl...]]></description><link>https://cremich.cloud/real-time-client-updates-without-the-overhead</link><guid isPermaLink="true">https://cremich.cloud/real-time-client-updates-without-the-overhead</guid><category><![CDATA[AWS]]></category><category><![CDATA[APIs]]></category><category><![CDATA[API Gateway]]></category><category><![CDATA[cloud architecture]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Wed, 01 Mar 2023 07:30:39 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1677619446521/524a881a-7702-4a07-aee2-1790ac830a3d.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Real-time updates are essential for many web applications, from chat rooms to real-time data visualization. A traditional technique for achieving real-time updates is called "polling". In this article, I will give a brief overview of two essential client-initiated techniques called "short-" and "long-polling". You will learn the differences between both approaches and how they influence the underlying implementation using various AWS services.</p>
<h2 id="heading-polling-in-several-flavors">Polling in several flavors</h2>
<h3 id="heading-short-polling">Short-Polling</h3>
<p>Polling (also known as short-polling or Ajax-polling) is like being on a road trip with your kids when they ask every minute, "Are we there?" Clients (kids) send regular requests to the server (dad) to check for updates. My kids determine the interval at which to ask for updates. Typically this interval is a fixed rate, for example every X seconds. Sometimes kids and clients alike go crazy by further decreasing the request interval while increasing the stress level for the server, or for me as a dad. 🤯</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677616123662/c21783c5-aa54-42e2-b72c-7af78c17ba28.png" alt="Short polling - client initiated short-lived requests for data updates at a fixed rate" class="image--center mx-auto" /></p>
<p>The browser's ability to send HTTP requests from client-side JavaScript opened up a whole new world of options in the early days of Web 2.0, and polling was one of the techniques it made possible. Polling impresses with its simplicity of implementation, but it is not very efficient, and there are important things to consider when building such integrations - even on modern cloud platforms.</p>
<p>When working for a German broadcasting station, we implemented polling in a companion voting app for a live TV show. The polling target was a configuration file hosted on Amazon S3. Some backend processes updated the configuration file once a new voting round was activated. This happened regularly every 5 minutes - while the client requested the configuration every 3 seconds. The majority of the time, the client fetched data it already knew. Not that efficient. Imagine you set up polling towards a good old Apache Tomcat server where every request is bound to a running thread. More clients combined with short request intervals can make your data center burn.</p>
<p>Due to its constant request intervals, polling increases the risk of high server load and latency. Finding the right interval settings and the right integrations is key to providing a good user experience. Bear in mind that if your request interval is shorter than the average response time of the server, your client will request faster than the server can respond. This will likely end up in a mess.</p>
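<p>Conceptually, the client side of short-polling boils down to a fixed-rate loop. This is a hypothetical Python sketch, not the original voting app code: <code>fetch</code> stands in for the HTTP request to the configuration file, and <code>on_update</code> for the client-side handler.</p>

```python
import time

def short_poll(fetch, on_update, interval=3.0, max_polls=None):
    """Request the current state at a fixed interval and invoke the
    callback only when the payload actually changed.

    Note: if `interval` is shorter than the server's average response
    time, requests pile up faster than they can be answered.
    """
    last_seen = None
    polls = 0
    while max_polls is None or polls < max_polls:
        payload = fetch()          # e.g. GET the configuration file
        if payload != last_seen:   # most polls return data we already know
            on_update(payload)
            last_seen = payload
        polls += 1
        time.sleep(interval)
```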
<h3 id="heading-long-polling">Long-Polling</h3>
<p>Besides traditional short- or Ajax-polling, there is the option of so-called "long-polling" - another interesting approach to providing your client applications with data updates.</p>
<p>Picking up the analogy of a road trip with my family, long-polling would be my kids (client) regularly asking me (server) "Are we there?" but more patiently (god, how cool would that be 😇). My kids hit me with the question but then wait in silence while I check my navigation system for updates.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677616132998/56834ccd-baa0-49e5-8f50-34877e293f5b.png" alt="Long polling - client initiated long-lived requests for data updates at a fixed rate relative to the response" class="image--center mx-auto" /></p>
<p>What makes this kind of polling "long-polling" is not necessarily that the client increases the request interval to multiple seconds or even minutes. It is more about the client being able to keep a TCP connection open and wait for a longer timeframe for a response from the server. On the server side, this means that the TCP connection is kept open until either the server runs into a timeout or a data change is recognized, so that it can respond to the client. Resources are blocked on both sides of the perimeter, waiting for a signal to close the connection. This signal can be a captured data change or a technical timeout. Long-polling is a contract that both sides have to agree on; otherwise this will end up in a huge mess if your client assumes it is short-polling 🤣</p>
<p>Keeping the connection open between client and server is the key characteristic of long-polling. This reduces the chattiness but has some implications for the underlying implementation, as we need something capable of keeping a connection to the client open for a longer time. Typically, the modern web is optimized to finish a request-response cycle as fast as possible. It also requires the server to manage those open connections at any expected scale.</p>
<p>Resources are finite, and so are the threads on my good old Apache Tomcat server. Using appropriate timeouts and error handling is important to improve reliability. But long-polling can be an effective technique, reducing server load and network overhead.</p>
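<p>On the server side, long-polling boils down to blocking a request until either a data change arrives or a timeout expires. Here is a minimal, hypothetical Python sketch using a threading event; class and method names are my own, chosen for illustration:</p>

```python
import threading

class LongPollChannel:
    """Server-side sketch of long-polling: a client request blocks
    until new data arrives or the timeout expires, instead of being
    answered immediately with data the client already has."""

    def __init__(self):
        self._changed = threading.Event()
        self._data = None

    def publish(self, data):
        # Called by a backend process when fresh data is available.
        self._data = data
        self._changed.set()

    def poll(self, timeout=25.0):
        # Called per client request; holds the connection open by
        # blocking the handling thread until data changes or we time out.
        if self._changed.wait(timeout):
            self._changed.clear()
            return self._data
        return None  # timeout: the client simply reconnects
```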
<h2 id="heading-implementation-considerations">Implementation considerations</h2>
<p>Nowadays we have the option to use managed services for our APIs, like Amazon API Gateway or AWS AppSync - an excellent starting point for polling integrations. To reduce latency and improve scaling, I would recommend using direct service integrations whenever possible, for example by directly reading data from an Amazon DynamoDB table without any additional compute layer in between. Backend processes can implement data-capturing scenarios to update data in your database, which is then fetched by upcoming client requests. Whenever you put compute services behind your API, you have to ensure this additional layer scales.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677618066492/0ef278dd-c7aa-48d1-bfb7-4e98a052c325.png" alt="short polling with direct service integrations" class="image--center mx-auto" /></p>
<p>For long-polling integrations, we need something that can keep a TCP connection open while regularly checking for updates in the background. This makes direct service integrations in our Amazon API Gateway or AWS AppSync API hard. Instead, we need a compute layer between the API and the database that can maintain TCP connections. Here I would go with something containerized, such as AWS Fargate, that integrates with a database like Amazon DynamoDB to watch for data changes. In some situations, I might even challenge using an Amazon API Gateway at all and connect my clients via an Application Load Balancer directly with my backend service. It depends a bit on what features of the managed API service you need in your use case.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1677618081462/3c52fc1f-4e9b-47e3-beb3-cdefb74dc7b3.png" alt="long polling with stateful integrations" class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>Real-time updates are essential for many web applications. Two popular techniques for achieving these updates are short-polling and long-polling. Both have in common that the client initiates the process of getting updates by regularly asking the server for fresh data.</p>
<p>Short-polling is a simple but potentially risky technique if clients get out of control. Long-polling might be a more reliable technique but requires some additional implementation effort to manage the lifecycle of client connections on the server. With careful consideration of the advantages and disadvantages of each technique, developers can choose the best approach for their real-time update needs. Good observability is key to keeping control of your polling-based integrations.</p>
]]></content:encoded></item><item><title><![CDATA[Create meaningful architecture diagrams using the C4 model]]></title><description><![CDATA[What do typical architecture diagrams look like? A bunch of boxes or icons connected by some dashed or solid (sometimes both) lines? While this can be a good starting point, it might not be what we as architects want to achieve in the end.
In this ar...]]></description><link>https://cremich.cloud/create-meaningful-architecture-diagrams-using-the-c4-model</link><guid isPermaLink="true">https://cremich.cloud/create-meaningful-architecture-diagrams-using-the-c4-model</guid><category><![CDATA[c4]]></category><category><![CDATA[solutionarchitect]]></category><category><![CDATA[software development]]></category><category><![CDATA[software architecture]]></category><category><![CDATA[cloud architecture]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Fri, 27 Jan 2023 10:29:54 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1674773639439/8ad902e8-4222-46c1-ac91-c188e70080a9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>What do typical architecture diagrams look like? A bunch of boxes or icons connected by some dashed or solid (sometimes both) lines? While this can be a good starting point, it might not be what we as architects want to achieve in the end.</p>
<p>In this article, I will describe how <a target="_blank" href="https://c4model.com/">C4 context diagrams</a> help me create meaningful architecture diagrams and help our teams build better solutions. Based on a practical example from one of my past enablements, you will learn how stepping back and zooming out supports you in tackling complexity, building a shared mental model, and making better decisions.</p>
<iframe width="100%" height="500" src="https://www.youtube-nocookie.com/embed/MFvGVNEc8XI?si=E6k9jsJLVv0mlVmZ"></iframe>

<h2 id="heading-tales-from-a-migration-project">Tales from a migration project</h2>
<p>It was the end of 2021 when the "project wheel of fortune" selected me to support a team on its journey to migrate a Customer-Identity-Management (CIAM) solution from Vendor A to Vendor B.</p>
<p>In the past years, there was a single product owner who had ownership of a bunch of domains related to customer relationships. This included customer acquisition, customer support and digital marketing. With the help of some external agencies, the product owner was able to set up a CIAM solution including login and registration flows using a no-code approach. The result of his work was a set of widgets that can be embedded into websites or mobile applications. Once the product owner finished the configuration, he asked the team responsible for the company's website to embed those widgets so that customers can</p>
<ul>
<li><p>register for a new account</p>
</li>
<li><p>login with their credentials or social identity providers</p>
</li>
<li><p>manage their profile</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674835195871/764bb35b-0fd3-4fe4-b938-1a48d37666f4.png" alt="A typical signup with integrated newsletter subscription feature" class="image--center mx-auto" /></p>
<p>The image above shows a <strong>typical signup flow that covers multiple business domains including identity management and digital marketing</strong> - in this case, implemented as a consent toggle that asks the customer if a subscription for the newsletter should be created along with signing up for a new account. A nice composition of multiple use cases.</p>
<h3 id="heading-when-architecture-diagrams-scream-for-help">When architecture diagrams scream for help</h3>
<p>One of the lead developers of the team was well-prepared for the project kickoff of the CIAM migration. He spent a lot of time catching up on the development of the last years and he created an architecture diagram of the current status quo. And he said</p>
<blockquote>
<p>I told you, this will be more complicated than we all expected!</p>
</blockquote>
<p>Do you know this uncomfortable feeling of standing in a crowded elevator? Architecture diagrams feel the same when the number of boxes and arrows outweighs the available space on your sheet. By carefully listening, you can hear them screaming for help - silent and lonely. I call this: <strong>A screaming architecture.</strong></p>
<p>There are boxes over boxes, connected by lines in different weights and styles using multiple connectors. Sometimes dashed, sometimes solid. There are also cases where boxes have inner boxes that occasionally have inner boxes of their own. Some of these boxes are big, some are small, some have borders, and some have none. An impenetrable jungle of shapes and colors with some flavor of abstract art.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674762956925/948c0fb8-df23-46d2-9721-5fb78dc55e9e.png" alt class="image--center mx-auto" /></p>
<p>There is a saying that a picture is worth a thousand words. With this screaming architecture diagram, the opposite is true. For me, the clearly articulated intention of this diagram was a warning sign for all stakeholders involved in this project not to underestimate the effort of our migration project. There was incredible value and feedback behind those boxes and lines. We as architects can use this to better support teams.</p>
<p><strong>You can see a lot of dimensions when you start looking at those diagrams from different perspectives and start talking with people to let them express their intentions and thoughts.</strong> At first sight, it was the sign of a tightly coupled system. At second sight, it was a sign that we did not understand the context we were in and the problems we had to solve. I felt like sitting in front of a knotted parachute right before jumping out of a flying plane for our first formation jump. My intuition told me: "You better untie the knots and sort things before you jump out of that plane."</p>
<p>But it was good that this diagram was created. It was the start of a series of very insightful discussions.</p>
<h3 id="heading-building-a-solution-starts-by-understanding-the-problem">Building a solution starts by understanding the problem</h3>
<p>The original (screaming) architecture diagram and the underlying user journey revealed interesting dependencies, intentions and terms. By interviewing stakeholders, we got an idea of the current state of domain language and mental models. The product owner told us about two types of registrations:</p>
<blockquote>
<p>A full registration means you register for a new account either with an e-mail/password or with one of the given social identity providers. All these accounts are flagged with the type of "full-registration".</p>
<p>A light registration is a newsletter subscription. It is called light because you only have to provide an e-mail address without a password or any other information about your profile. It is just for managing our newsletter subscribers. All these accounts are flagged with the type of "light-registration" but you cannot log in as you have not provided a password or something like this.</p>
</blockquote>
<p>We identified that this thing called "light registration" was the concept of a newsletter subscription pressed into the CIAM domain. <strong>The reason it was built like this was that the newsletter system did not provide any no-code integrations into customer-facing applications.</strong> The only option the product owner considered was to configure yet another modified registration flow within the old CIAM system, and to ask an agency to build some ETL jobs to move data from the CIAM into the newsletter system and vice versa. This resulted in a tremendous amount of technical debt and complexity - completely ignoring different problem spaces, system purposes and business domains, as well as the fact that not every use case that requires an e-mail address and consent is by default related to CIAM.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674835227772/5850d0a2-60a4-4c38-8918-80c7402dac0a.png" alt="Is this customer identity management?" class="image--center mx-auto" /></p>
<p>The discussions we had while trying to understand the status quo were extremely valuable. It highlighted:</p>
<ol>
<li><p><strong>Unclear domain boundaries:</strong> The domains of identity management and digital marketing (newsletters) were like two magnets attracting each other over time, resulting in low cohesion and tight coupling on a system level.</p>
</li>
<li><p><strong>Insufficient domain knowledge of the team:</strong> As the development team's job was just to embed some widgets, they never had the chance to step into the real problem space. They never had the chance to challenge things, and their solution space was limited to thinking about how to embed external widgets on the website.</p>
</li>
</ol>
<p>Whenever I get this overwhelming feeling of not understanding the context I am in, of too much complexity or whenever I need too much time to process the vast amount of information transported in a diagram - <a target="_blank" href="https://architectelevator.com/architecture/architects-zoom/">I know it is time for a new perspective</a>. It is time to step back and zoom out.</p>
<p>This helped to shift the focus from just looking at the technical-centric status quo, more into a customer-centric state-to-be. <strong>We started to embrace problem-first thinking by zooming out.</strong></p>
<h2 id="heading-if-you-get-lost-in-boxes-and-lines-zoom-out">If you get lost in boxes and lines - zoom out</h2>
<p>I use the C4 model to design software solutions. The general idea of this model is to provide several layers, each with a defined purpose and perspective on your solution. Every layer is connected, allowing you to zoom in and out at will.</p>
<ol>
<li><p>A <strong>Context Diagram</strong> is the most zoomed-out view. It provides a holistic view of the system you are building or extending.</p>
</li>
<li><p>A <strong>Container Diagram</strong> is the first zoom level that reveals the inner building blocks of your system. It provides a more detailed perspective on the system's components and their interaction. Each building block is generically called a container (not a Docker container), hence the diagram's name. A container represents something that needs to be running for your system to work. This can be an application, a service or a data store.</p>
</li>
<li><p>If you zoom into a container, you find yourself in the middle of a <strong>Component Diagram</strong>. On this level of detail, you get the perspective of the parts your container is composed of. In the context of building solutions running on AWS, I often use AWS service icons or enterprise integration patterns to express implementation details.</p>
</li>
<li><p>Last but not least there is the level of <strong>Code Diagrams</strong> that show you actual code structures and relationships of classes, functions or other low-level details.</p>
</li>
</ol>
<h3 id="heading-elements-of-a-context-diagram">Elements of a context diagram</h3>
<p>The <strong>primary element</strong> is the blue box describing the software system you want to build. Sometimes you end up having more than one system and from my perspective, this is okay for the moment.</p>
<p><strong>Supporting elements</strong> are the people (users, actors, roles, ...) and external systems (the grey boxes). External systems are those your team has no or only limited control over. But we cannot ignore those systems, as they somehow interact with our system in scope.</p>
<p>Connections between the primary system and the supporting elements describe the <strong>flow of information and data</strong> in between. Connections can also be used to describe certain use cases or intentions of how to use the system.</p>
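<p>To make these elements concrete, here is a rough sketch of how such a context diagram could be expressed as code, using Mermaid's C4 syntax. The people, systems and relationships are illustrative placeholders, not the actual CIAM landscape described in this post:</p>
<pre><code class="lang-plaintext">C4Context
  title Context diagram sketch (illustrative)
  Person(customer, "Customer", "Registers, logs in and manages a profile")
  System(ciam, "CIAM", "The system in scope: customer identity and access management")
  System_Ext(newsletter, "Newsletter System", "External: manages newsletter subscriptions")
  Rel(customer, ciam, "Signs up / logs in")
  Rel(ciam, newsletter, "Forwards newsletter consent")
</code></pre>
<p>A nice side effect of diagrams-as-code: the context diagram can live in version control right next to the solution it describes.</p>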
<h3 id="heading-our-ciam-context">Our CIAM context</h3>
<p>By providing several levels and layers, the C4 model supports my maneuver of zooming out. We started to increase the altitude to get a more holistic perspective on what we wanted to achieve. So we started to draw a context diagram by putting the system we build - a CIAM - in the center of our diagram.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674770842215/2c4c3cb8-1ae3-4b7f-96c1-9d407ca3f438.png" alt class="image--center mx-auto" /></p>
<p>Describing the world around CIAM was the next important step. Start by asking yourself and the team: "What are the systems and users that interact with the one we build?". The level of detail is not important at this stage as we intentionally zoomed out and want to see the bigger picture.</p>
<p>The main focus here is on actors, roles, personas and software systems instead of technologies, cloud services or other implementation details. Prevent yourself (and your team) from getting lost in details: keep things like API gateways, S3 buckets and similar services off the diagram. We don't want to create yet another screaming architecture. Try to define the software systems instead of the individual components. If the context diagram has a level of detail that you could show to non-technical people, you are on the right track.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674770924550/b31107a2-afaf-4fd2-9b6f-fff97db3be1e.png" alt class="image--center mx-auto" /></p>
<p>This step enables valuable discussions with a lot of feedback and discovery opportunities. Instead of just drawing lines and boxes we started to embrace problem-first thinking. We started to think more about HOW a system is used instead of WHAT the system is composed of.</p>
<p>We ended up challenging the current state of service composition and decided how we wanted to modularize the systems. This resulted in a more domain-aligned composition of systems with <a target="_blank" href="https://www.dddheuristics.com/design-heuristics/align-with-domain-experts/">clearer boundaries and responsibilities</a>. Our future CIAM system should be responsible for everything related to our customer lifecycle, authentication and authorization. Our newsletter system should be decoupled from CIAM concerns but still integrated where needed. The backend of the website, as another external system, should route and orchestrate information and data to the right systems depending on the given use cases. The final context diagram looked like this.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1674815081170/4197e27c-1c2f-4d74-ad73-222e42c4e0de.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-feedback-after-running-in-production">Feedback after running in production</h3>
<p>We see <strong>improvements along the way in terms of maintainability and interchangeability of systems</strong> after having this now in production for several months.</p>
<p>A nice story that underlines especially the interchangeability improvements: The service contract with the newsletter system is about to end. The team gave the feedback that exchanging the existing newsletter system with something else is now easier than before. That is great to hear and a great achievement of the work of the whole team.</p>
<p>We also see that this way of reducing complexity gives both our digital marketing and CIAM teams more flexibility in doing their jobs. With this clear separation of concerns, both teams can work more independently, resulting in <strong>less required coordination and communication across teams</strong> along with overall improvements in software delivery performance.</p>
<h2 id="heading-wrap-up">Wrap-Up</h2>
<p>Making a step back and creating a context diagram with the team was a success story and helped to improve systems thinking. <strong>We ended up having a common shared mental model of the thing we want to build</strong>. And we defined clear boundaries along business domains with a stronger separation of concerns.</p>
<p>Good whiteboarding and diagramming skills are essentials in your architect toolbox. The easy part is to grab a marker and start drawing some lines and boxes. But does this help? Does this make an architecture diagram meaningful and a whiteboarding session useful? I don't think so. An architecture diagram is not a silver bullet. Depending on what you want to express, it is more than just a bunch of boxes connected by some lines. <strong>There are several aspects and approaches you can use to give your architecture diagrams more meaning and increase the value of those assets.</strong> Zooming out is one of them, and it can be a powerful method. In this post, I described my learnings from using C4 context diagrams to</p>
<ul>
<li><p>form a shared mental model,</p>
</li>
<li><p>establish a ubiquitous language and</p>
</li>
<li><p>foster problem-first thinking.</p>
</li>
</ul>
<p>What's the purpose of building software? The purpose is to solve a customer problem. <strong>Building a solution starts by understanding the problem we want to solve. Not by creating screaming architecture diagrams.</strong></p>
]]></content:encoded></item><item><title><![CDATA[Why you should use Amazon Pinpoint for solving your engagement challenges]]></title><description><![CDATA[In AWS you have multiple options to send native push notifications to your mobile applications - Amazon SNS or Amazon Pinpoint. But which one should you use? In this article, I will describe why I prefer to use Amazon Pinpoint to solve common marketi...]]></description><link>https://cremich.cloud/engaging-football-fans-with-mobile-push-notifications</link><guid isPermaLink="true">https://cremich.cloud/engaging-football-fans-with-mobile-push-notifications</guid><category><![CDATA[serverless]]></category><category><![CDATA[engagement]]></category><category><![CDATA[AWS]]></category><category><![CDATA[sns]]></category><category><![CDATA[PinPoint]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Wed, 04 Jan 2023 09:00:42 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1672777054806/6ab93e10-aeb6-460b-920b-fa15ebd4e76a.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In AWS you have multiple options to send native push notifications to your mobile applications - Amazon SNS or Amazon Pinpoint. But which one should you use? In this article, I will describe <strong>why I prefer to use Amazon Pinpoint</strong> to solve common <strong>marketing and engagement challenges</strong> and how this service compares to Amazon SNS.</p>
<p>One of the key capabilities of (mobile) sports applications is, to inform fans about what is happening during the game. This includes for example notifications about important events like kick-offs, goals or cards. <strong>The majority of mobile sports applications use push notifications as the primary channel to keep their fans up to date</strong>. Depending on the application platforms, notifications are sent directly to the devices with either the Apple Push notification service (APNS) or the Google Cloud Messaging (GCM) service.</p>
<h2 id="heading-fan-engagement-has-multiple-perspectives">👀 Fan engagement has multiple perspectives</h2>
<p>When we talk about fan engagement and all the engagement-related challenges of marketers these days, we quickly come to the point that solving these challenges goes beyond just sending a notification or message.</p>
<p>Companies that are able to segment their audience put themselves in the position to create and communicate specific targeted marketing messages that align with the interests and emotions of specific customer groups. According to the <a target="_blank" href="https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/the-value-of-getting-personalization-right-or-wrong-is-multiplying">"Next in Personalization 2021"</a> report, <strong>72% of consumers expect brands to demonstrate they know them on a personal level.</strong> <a target="_blank" href="https://www.forbes.com/sites/forbesagencycouncil/2021/03/09/why-trust-defines-success-in-customer-engagement/?sh=6c09c02b1ec2">Trust defines customer engagement.</a> Personalization is not just a recommendation engine. Personalization is relevant for a variety of domains. <strong>It is a commitment to streamlining your activities according to many customer demands.</strong></p>
<p>And the future of fan engagement is omnichannel. Targeting the right fans, at the right time using the right - often - multiple channels. This requires as always the right people, the right processes and the right technology.</p>
<blockquote>
<p>Investments in omnichannel are improving, but still have a long way to go. [...]</p>
<p>Only 35% of companies feel they are successfully achieving omnichannel personalization, up from 24% in 2021.</p>
<p>Source: <a target="_blank" href="https://segment.com/state-of-personalization-report/">https://segment.com/state-of-personalization-report/</a></p>
</blockquote>
<p>Leveraging notification channels for sports applications can be seen from two perspectives. Those perspectives each come with different technical, functional and non-functional requirements. They also put different kinds of KPIs in the focus of decision-making and success evaluation.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1672753795873/bb4a126d-71a2-484a-ae06-40c6e2b29425.png" alt="Flow of notifications - Match schedule as the main driver of engagement" class="image--center mx-auto" /></p>
<h3 id="heading-the-first-perspective-the-matchday">⚽️ The first perspective: the matchday</h3>
<p>Sports in general and football matches in particular can write awesome stories. And it is not surprising that the things that happen on the pitch are usually the main driver of engagement. There is not much to add if your favorite club scores the go-ahead goal in your local derby. Or if your favorite player gets substituted in and helps your club finish a match with a draw or even a win after trailing the whole match. <strong>In this case, you might not necessarily look at the open rates of our match-event-focused push notifications.</strong> Those notifications can be seen as an additional layer of engagement. Encouraging fans to open those notifications depends on many factors. Opening notifications is unlikely if fans watch a match live. It gets more likely if they don't have access to watch a match live for any reason. In both cases, you gain a lot of additional opportunities depending on your fans' behavior.</p>
<ol>
<li><p>📺 <strong>For fans that watch a match live</strong>, those notifications can provide additional or "unseen" information. If a fan watches a match live, the information that a goal happened is already transported via the big screen. Storytelling can be extended to further increase engagement by mixing our real-time sports data with such marketing channels.</p>
</li>
<li><p>📲 <strong>For fans that are NOT able to watch a match live</strong>, latency can be an important USP of your product. We want our fans to cheer first. To be the king or queen in a group of people. Imagine you get the notification first that your favorite team won the championship. This will be the ultimate hugging guarantee from the Bundesliga. Try it out!</p>
</li>
</ol>
<p>This means: <strong>match-related notifications are highly contextual</strong>. Combined with the fact that sending out notifications about sports events can produce a lot of notifications within the timeframe of a match, you have to think about strategies to prevent fan churn. Think about segmentation and what kind of fans you want to target and provide value to. Otherwise, keep in mind that your fans might simply ignore your notifications (and the effort you spent sending them out) or <a target="_blank" href="https://www.businessofapps.com/marketplace/push-notifications/research/push-notifications-statistics/">leave your platform</a> and uninstall your application for several reasons.</p>
<blockquote>
<p>Segmentation increases the likelihood that customers will engage with the brand, and reduces the potential for communications fatigue — that is, the disengagement of customers who feel like they’re receiving too many messages that don’t apply to them.</p>
<p><strong>Source:</strong> <a target="_blank" href="https://aws.amazon.com/blogs/messaging-and-targeting/use-machine-learning-to-target-your-customers-based-on-their-interest-in-a-product-or-product-attribute/">Target your customers with ML based on their interest in a product or product attribute</a></p>
</blockquote>
<h3 id="heading-the-second-perspective-between-the-matchdays">😴 The second perspective: between the matchdays</h3>
<p>Or, as I call it: the fan wake-up call. Extending the matchday experience between matches can be a very important task to keep your level of engagement and retention stable. <strong>It is a typical pattern to have high engagement on matchdays, followed by a drop in your engagement-related KPIs once a matchday is over.</strong> Several marketing strategies and campaigns help you extend the so-called matchday experience. Either post-match, by sending out notifications about highlight clips, interviews or match reports. Or pre-match, by engaging your fans with potential line-ups, injured players or relevant background information about the upcoming matches. In this case, more classic metrics like open rates or session metrics are very valuable KPIs to measure engagement and success.</p>
<h2 id="heading-from-pitch-to-push-notification">🏟️ From pitch to push notification</h2>
<p>What happens behind the scenes when a goal is scored and you want to use this as a trigger to send a push notification? Let us zoom out a bit and take a closer look at a real-life example from the professional football league in Germany: the <a target="_blank" href="https://www.bundesliga.com">Bundesliga</a>.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1672753970296/4c4dadb6-7d9c-449d-879d-0b8508f79277.png" alt="From pitch to push notification - very high level architecture" class="image--center mx-auto" /></p>
<p>The DFL subsidiary company - <a target="_blank" href="https://www.linkedin.com/company/sportec-solutions-ag/">Sportec Solutions AG</a> - is the official data provider for all data around Bundesliga and Bundesliga 2 football matches. From here we get all the events like goals, substitutions, cards and fouls in real time. This enables the Bundesliga to build great digital products like the official Bundesliga App.</p>
<p>When a player scores a goal, this information is pushed to a match data processing service. The main responsibility of this service is to receive and process all events that occur during a match and to decide how to act on those events.</p>
<p>One example of how to act on events like goals, cards or kickoffs is to send out a push notification using Amazon Pinpoint. This especially engages those fans who are not actively using the Bundesliga apps. Those events are not simply broadcast to all fans. The relevant target segments are selected based on the type of event, the associated match and the fans' interest in receiving specific events. This results in more <strong>specific targeting, sending out notifications to fans that have an explicitly defined interest in receiving them</strong>. Everything is fully automated.</p>
<p>Defining the right level of segmentation is an important success factor in your engagement and marketing story. This can result in building several layers of segments that allow you to target your fans using very specific characteristics like</p>
<ul>
<li><p>⚽️ a match,</p>
</li>
<li><p>🗓 a matchday,</p>
</li>
<li><p>🐐 a club or</p>
</li>
<li><p>🏟 individual events.</p>
</li>
</ul>
<p>Amazon Pinpoint gives you the capabilities to create and the flexibility to adapt your segmentations at any given time.</p>
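<p>To illustrate what such layered segments can look like on the API level, here is a minimal sketch of a Pinpoint segment definition in JavaScript. The endpoint attribute names (<code>favoriteClub</code>, <code>subscribedEvents</code>) are hypothetical, chosen for this example - they are not the attributes actually used by the Bundesliga:</p>

```javascript
// Sketch: build a WriteSegmentRequest that targets fans of one club who
// opted in to specific event types. Attribute names are illustrative.
function buildClubSegment(club, eventTypes) {
  return {
    Name: `fans-of-${club}`,
    Dimensions: {
      Attributes: {
        // INCLUSIVE: match endpoints whose attribute equals one of the values
        favoriteClub: { AttributeType: "INCLUSIVE", Values: [club] },
        subscribedEvents: { AttributeType: "INCLUSIVE", Values: eventTypes },
      },
    },
  };
}

// With the AWS SDK for JavaScript v3, this request object could then be sent
// via PinpointClient and CreateSegmentCommand({ ApplicationId, WriteSegmentRequest }).
const segment = buildClubSegment("example-fc", ["goal", "kickoff"]);
console.log(segment.Name); // "fans-of-example-fc"
```

<p>Because the segment is just data, you can generate one per club, matchday or event type and keep them in sync with your real-time match data automatically.</p>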
<p>If you want to know more about the official Bundesliga match data, I can highly recommend the following video which explains the whole process in detail.</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=DSAYcek__ic">https://www.youtube.com/watch?v=DSAYcek__ic</a></div>
<h2 id="heading-why-you-should-use-amazon-pinpoint">🔔 Why should you use Amazon Pinpoint?</h2>
<p>Generally speaking, there are two recommended ways to send push notifications - using Amazon SNS or Amazon Pinpoint. Although both services have similar capabilities - like sending push notifications, e-mails or SMS - they have different intentions. <strong>Yes, we can say they are united in the cloud but divided by purpose.</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1672754046030/c6cffdec-8096-438e-b983-a2b90979a441.png" alt="United in cloud, divided by purpose - Comparing Amazon SNS and Amazon Pinpoint" class="image--center mx-auto" /></p>
<p>📢 You will find Amazon SNS - the Simple Notification Service - in the category of “Application Integration”. <strong>From my perspective, its main purpose is more related to purely technical use cases.</strong> It is often used when you need to implement messaging scenarios or event-driven architectures, or want to decouple components and systems. In a nutshell, Amazon SNS is the implementation of a <a target="_blank" href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/PublishSubscribeChannel.html">Publish-Subscribe</a> pattern and does not give you a rich feature set for solving marketing-related challenges. In any case, please be aware of the <a target="_blank" href="https://docs.aws.amazon.com/general/latest/gr/sns.html#limits_sns">limits and quotas</a>. Amazon SNS has several hard limits that influence your integration and architecture, especially when it comes to topic subscriptions and unsubscriptions.</p>
<p>🎯 Amazon Pinpoint, on the other side, is named a “Multichannel Marketing Communication Service” and is located in the “Business Applications” category. It is <strong>NOT</strong> just about sending messages over a given channel. <strong>It is about building business use cases for marketing and engagement over multiple channels.</strong> Hence Amazon Pinpoint provides a lot more features and capabilities than Amazon SNS. It is hard to pin down one central pattern it implements. Viewed from the outside, the centerpiece of Amazon Pinpoint is the implementation of a <a target="_blank" href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/RecipientList.html">Recipient List</a> combined with a <a target="_blank" href="https://www.enterpriseintegrationpatterns.com/patterns/messaging/ProcessManager.html">Process Manager</a>. Looking more closely, it is a composition of integration patterns to engage with your audience and solve a broad range of marketing business problems.</p>
<p>You can achieve the same, but you have to take a look from different perspectives to find the right service for the right job.</p>
<h3 id="heading-wrap-up">Wrap up</h3>
<p>Both services - Amazon SNS and Amazon Pinpoint - are capable of sending push notifications. But they have different intentions, tradeoffs and strengths. <strong>You can start with Amazon SNS if you just want to send out a (transactional) message.</strong> As soon as you move into more advanced marketing and engagement scenarios and have the requirement of segmentation and personalized messages, you will hit limitations using Amazon SNS. How do you measure the impact of your notifications? How do you know that your messages targeted the right fans? Keep in mind that using Amazon SNS will force you to build a lot of custom stuff around the pure notification part.</p>
<p>With Amazon Pinpoint you make the shift from a pure message-focused approach to a real marketing and engagement-focused approach. <strong>Amazon Pinpoint enables you to build omnichannel customer experiences that go beyond pure messaging.</strong> The service gives you features to define your audience, create dynamic segments and target your audience, while giving you options to measure the impact of your marketing efforts. This enables you to put your fans at the center of the engagement, not just the raw message. It also gives marketers options to create segments based on recent trends.</p>
<p>And what about measuring your KPIs? Amazon Pinpoint comes with a whole analytics integration and provides features out of the box to analyze your campaign performance and important engagement-related KPIs. Have you ever tried this with Amazon SNS? <strong>I can tell you: it won't scale and you won't close</strong> <a target="_blank" href="https://segment.com/pdfs/State-of-Personalization-Report-Twilio-Segment-2022.pdf"><strong>the omnichannel gap</strong></a><strong>!</strong></p>
<hr />
<p>You are interested to know more about how we at Bundesliga are able to <strong>send out a goal notification to a hundred thousand fans in real time during a match</strong>? Reach out to me and I am happy to present you <strong>bits, bytes and insights</strong> about our journey leveraging Amazon Pinpoint for fan engagement.</p>
]]></content:encoded></item><item><title><![CDATA[AWS Step Function vs. AWS Lambda  benchmark - Part 2]]></title><description><![CDATA[🥊 It is time for a battle again. After I published the first part of my comparison, I was overwhelmed by the amount of feedback I received. May it be comments on my post, or discussions on Twitter or LinkedIn.
The fact that the initial post triggere...]]></description><link>https://cremich.cloud/aws-step-function-vs-aws-lambda-benchmark-part-2</link><guid isPermaLink="true">https://cremich.cloud/aws-step-function-vs-aws-lambda-benchmark-part-2</guid><category><![CDATA[serverless]]></category><category><![CDATA[AWS]]></category><category><![CDATA[aws lambda]]></category><category><![CDATA[AWS Step Functions]]></category><category><![CDATA[Benchmark]]></category><dc:creator><![CDATA[Christian Bonzelet]]></dc:creator><pubDate>Thu, 29 Dec 2022 21:05:51 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/stock/unsplash/ae2a590b27124f8b17c02fd36a9216f4.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>🥊 It is time for a battle again. After I published <a target="_blank" href="https://cremich.cloud/aws-step-function-vs-aws-lambda-benchmark">the first part</a> of my comparison, I was overwhelmed by the amount of feedback I received. May it be comments on my post, or discussions on Twitter or LinkedIn.</p>
<p>The fact that the initial post triggered a lot of inspiring discussions is very valuable. While reading through your feedback it was kind of obvious that there is a need for a second part.</p>
<p>I received a lot of feedback about optimizations for AWS Lambda and people are curious about how this affects the performance in comparison to our state machine. We will also take a closer look at the perspective of costs to get a more complete view of how the services differ - here we are.</p>
<p>Like in our first part, again all experiments are triggered using <a target="_blank" href="https://httpd.apache.org/docs/2.4/programs/ab.html">Apache Bench</a> with the following parameters.</p>
<pre><code class="lang-bash">ab -n 15000 -c 1 https://hash.execute-api.eu-central-1.amazonaws.com/.../
</code></pre>
<p><code>-n</code> sets the total number of requests that are triggered - in our case 15,000. <code>-c</code> is the number of concurrent requests - in our setup 1.</p>
<p>⚠️ <strong>IMPORTANT:</strong> It is important to consider that the results from Apache Bench are not 100% accurate. The measured throughput depends on the hardware and network capabilities of my local workstation. For upcoming benchmarks, I consider using something like CloudShell. But Apache Bench gives some very early feedback and potential indications. Hence we use these results in combination with the Lambda duration and Step Functions execution duration.</p>
<h2 id="heading-optimizing-our-lambda-function">🔋 Optimizing our Lambda function</h2>
<p>So what is the goal of our upcoming experiments? We want to apply some optimizations to our Lambda function with a clear focus to decrease latencies. Based on the feedback I got, there were two main approaches for optimization:</p>
<ol>
<li><p>Reusing downstream HTTP connections by activating keep-alive settings.</p>
</li>
<li><p>Improving overall execution performance by increasing the allocated memory.</p>
</li>
</ol>
<h3 id="heading-reusing-connections-with-keep-alive-in-nodejs">Reusing Connections with Keep-Alive in Node.js</h3>
<p>For short-lived operations, such as in our case writing and reading to and from S3, the latency overhead of setting up a TCP connection might be greater than the operation itself. To activate HTTP keep-alive you simply have to set an environment variable in your Lambda function configuration.</p>
<pre><code class="lang-yaml">Environment:
  Variables:
    AWS_NODEJS_CONNECTION_REUSE_ENABLED: 1
</code></pre>
<p>In case you already use v3 of the AWS JS SDK, this setting is <a target="_blank" href="https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/node-reusing-connections.html">enabled by default</a>. For v2 you have to <a target="_blank" href="https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/node-reusing-connections.html">explicitly activate it</a>.</p>
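<p>Under the hood, this environment variable makes the SDK use a keep-alive HTTP agent. Here is a minimal sketch of the equivalent explicit setup for SDK v2, using only Node's standard library; the commented-out S3 line shows where the agent would be plugged in and is illustrative, not taken from my benchmark code:</p>
<pre><code class="lang-javascript">const https = require("https");

// An agent with keep-alive enabled reuses TCP connections across
// requests instead of paying the connection setup cost every time.
const agent = new https.Agent({
  keepAlive: true,
  maxSockets: 50, // optional: cap the number of parallel sockets
});

// With aws-sdk v2 you would pass it to the client, e.g.:
// const s3 = new AWS.S3({ httpOptions: { agent } });

console.log(agent.keepAlive); // true
</code></pre>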
<p>Let us deploy the change and run our first test, starting with an analysis of the Apache Bench reports. The complete reporting is available on <a target="_blank" href="https://github.com/cremich/aws-sf-lambda-benchmark/tree/main/benchmark">GitHub</a>. Here are some highlights:</p>
<ul>
<li><p>The Lambda function was able to process all requests 43 seconds faster compared to the state machine.</p>
</li>
<li><p>Both the state machine and the Lambda function were able to process roughly 7 requests per second.</p>
</li>
<li><p>The mean time per request for the Lambda function was 131ms and 134ms for the state machine.</p>
</li>
</ul>
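<p>As a quick sanity check of my own (not part of the ab report): with a concurrency of 1, the throughput is simply the inverse of the mean time per request, so the two highlight numbers above are consistent with each other.</p>
<pre><code class="lang-javascript">// With -c 1 there is never more than one request in flight, so
// requests per second = 1000 / mean time per request (in ms).
const meanTimeMs = { lambda: 131, stateMachine: 134 };

for (const [name, ms] of Object.entries(meanTimeMs)) {
  console.log(name, (1000 / ms).toFixed(1), "req/s");
}
// lambda 7.6 req/s
// stateMachine 7.5 req/s
</code></pre>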
<p>Looking at these results, this little tweak of activating TCP keep-alive helped a lot to speed up the Lambda function. In terms of end-to-end performance and latency, both solutions are now very close to each other.</p>
<p>Let us take a closer look into CloudWatch and X-Ray to confirm the observations.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aysa1t8hlox1s73vb6gd.png" alt="latencies with keep-alive" /></p>
<p>The average execution time of the state machine is 46.4ms and Lambda performs at 49ms.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nt9uhcwylc7ujpjnr8ow.png" alt="x-ray service map with keep alive" /></p>
<p>Here things still look interesting. The average duration of the Lambda function still has some ups and downs during the test run, while the duration of the state machine is stable. Both solutions show some cold-start behavior, although the state machine seems to need less time to become "warm".</p>
<p>But in total the impact on the Lambda function performance is very impressive compared to the results in the first part.</p>
<h3 id="heading-give-the-lambda-function-some-ram">Give the Lambda function some RAM</h3>
<p>The question is: how much memory does my Lambda function need? The range is quite large, from 128 MB to 10,240 MB. There is an awesome open-source tool called "<a target="_blank" href="https://github.com/alexcasalboni/aws-lambda-power-tuning">Lambda Power Tuner</a>" that helps you determine your memory setting based on different strategies like speed, cost, or balanced.</p>
<blockquote>
<p>If you use "cost" the state machine will suggest the cheapest option (disregarding its performance), while if you use "speed" the state machine will suggest the fastest option (disregarding its cost). When using "balanced" the state machine will choose a compromise between "cost" and "speed"</p>
</blockquote>
<p>Source: <a target="_blank" href="https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:451282441545:applications~aws-lambda-power-tuning">Lambda Power Tuner @ AWS Serverless Application Repository</a></p>
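<p>For reference, the execution input for the tuning state machine looks roughly like this (field names follow the project's README; the ARN and values here are placeholders, not taken from my actual run):</p>
<pre><code class="lang-javascript">// Illustrative input event for the power-tuning state machine.
const tuningInput = {
  lambdaARN: "arn:aws:lambda:eu-central-1:123456789012:function:my-function",
  powerValues: [128, 256, 512, 1024, 2048], // memory settings to test
  num: 50,                                  // invocations per setting
  payload: "{}",                            // event passed to the function
  strategy: "balanced",                     // "cost" | "speed" | "balanced"
};

console.log(JSON.stringify(tuningInput));
</code></pre>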
<p>In my case, the "Lambda Power Tuner" suggested 256 MB as the "Best cost" and 2048 MB as the "Best Time".</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i0v4xxyzdlvi7x79gfvc.png" alt="lambda-power-tuner-output" /></p>
<p>Awesome, now we have a good start for the final tests.</p>
<h4 id="heading-best-time-setting">Best time setting</h4>
<p>As we aim to reduce latency, let us start with the proposed "Best Time" setting of 2048 MB memory and have a look at the Apache Bench metrics:</p>
<ul>
<li><p>The Lambda function was able to process all requests 81 seconds faster compared to the state machine.</p>
</li>
<li><p>Both the state machine and the Lambda function were able to process roughly 8 requests per second.</p>
</li>
<li><p>The mean time per request for the Lambda function was 121ms and 127ms for the state machine.</p>
</li>
</ul>
<p>Compared to our first test, there is some improvement, but it seems marginal on average. Let us try to get some more insights using CloudWatch and X-Ray.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1mf2qilnwy8s087eeldh.png" alt="cloudwatch-latencies-2048" /></p>
<p>For the most part, the duration of the Lambda function is just below the execution time of the state machine. The average execution time of the state machine is 45.1ms and Lambda shines with 41.8ms.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ma1m8l4tv1x4m9emw9q4.png" alt="xray-service-map-2048" /></p>
<p>What would happen if we set our memory configuration to the setting considered as "Best cost"? Let us review the results in the next section.</p>
<h4 id="heading-best-cost-setting">Best cost setting</h4>
<p>In short, here are the Apache Bench metrics again:</p>
<ul>
<li><p>The Lambda function was able to process all requests 155 seconds faster compared to the state machine.</p>
</li>
<li><p>The state machine was able to process 7.5 requests per second, while the Lambda function processed 8 requests per second.</p>
</li>
<li><p>The mean time per request for the Lambda function was 122ms and 132ms for the state machine.</p>
</li>
</ul>
<p>CloudWatch and X-Ray also confirm that both solutions are very close.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vlbnp90x0645ljlc2jjq.png" alt="cloudwatch-256" /></p>
<p>The average execution time of the state machine is 54.8ms and Lambda is just in the lead with 50.5ms.</p>
<p><img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xjifyz7mgjxujcivhhse.png" alt="xray-256" /></p>
<h2 id="heading-cost-comparison">💰 Cost comparison</h2>
<p>Given the scale of my test, the AWS Cost Explorer was not really helpful, as the load I generated was too low. The AWS Pricing Calculator is a better tool to compare the costs of both services.</p>
<p>The estimate is <a target="_blank" href="https://calculator.aws/#/estimate?id=16d3b9fd0f064aac0f7d743fb47ad2b2044ad91e">publicly available</a> if you want to have a detailed look.</p>
<p>I calculated 5 million invocations per month per service. Based on our test results, I was able to determine very precise values for the parameters that influence pricing, such as Lambda invocation duration, state machine execution duration, and consumed memory. The monthly costs are:</p>
<ul>
<li><p>8 USD for AWS Lambda with 2048MB memory (Best time)</p>
</li>
<li><p>1.83 USD for AWS Lambda with 256 MB memory (Best cost)</p>
</li>
<li><p>5.52 USD for the AWS Step Function express workflow</p>
</li>
</ul>
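<p>The Lambda figures above can be reproduced with a few lines of arithmetic. The prices below are the list prices I assumed at the time of writing and may differ by region or change over time; Lambda also rounds the billed duration per invocation, so using the average duration is an approximation.</p>
<pre><code class="lang-javascript">const REQUEST_PRICE = 0.2 / 1e6;       // USD per invocation (assumed)
const GB_SECOND_PRICE = 0.0000166667;  // USD per GB-second (assumed)

function lambdaMonthlyCost(invocations, memoryMb, avgDurationMs) {
  // Compute cost = billed GB-seconds, plus a flat per-request fee.
  const gbSeconds = invocations * (memoryMb / 1024) * (avgDurationMs / 1000);
  return invocations * REQUEST_PRICE + gbSeconds * GB_SECOND_PRICE;
}

// 5 million invocations at 2048 MB and ~42ms average duration:
console.log(lambdaMonthlyCost(5e6, 2048, 42).toFixed(2)); // 8.00
</code></pre>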
<h2 id="heading-conclusion">💡 Conclusion</h2>
<p>In this part, we covered some important aspects, such as options to improve the performance of a Lambda function. I think it is again very important to mention that this benchmark should not be interpreted as "use Step Functions whenever you can".</p>
<p>My goal was rather to spark discussions about the importance of not basing your decision on hypotheses or rumors. Base your decisions on data.</p>
<p>I would again like to point out a quote from <a target="_blank" href="https://twitter.com/edjgeek">Eric Johnson</a> at <a target="_blank" href="https://www.youtube.com/watch?v=zdmCYPvOHoo">serverless office hours</a>:</p>
<blockquote>
<p>Use Lambda to transform not to transport</p>
</blockquote>
<p>Or in my words: the best code is the code that is never written.</p>
<p>☝️ And here comes the thing and this is very important to keep in mind:</p>
<p>BOTH SERVICES ARE AWESOME.</p>
<p>If you need to write a Lambda function, you will be able to solve a lot of problems. But depending on what you want to achieve, AWS Step Functions give you a lot of power to get the same results without writing ANY line of code - and without making up your mind about things like TCP keep-alive or the best memory setting. In all tests, AWS Lambda showed the well-known cold-start behavior, which is something you should keep in mind. AWS Step Functions also need some warm-up time, but it is not comparable to AWS Lambda cold starts. There was an interesting discussion about this on Twitter:</p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://twitter.com/diegosantiviago/status/1453733187666857985">https://twitter.com/diegosantiviago/status/1453733187666857985</a></div>
<p>It only remains to say: happy coding AND happy orchestrating! 🥳 I hope that my analysis and approach to decision-making help you decide for or against one of these services for your individual use cases.</p>
]]></content:encoded></item></channel></rss>