Learning and understanding AI concepts is satisfying and rewarding, but the fun part is learning how to work with AI yourself. In this half-day workshop, author, trainer, and experienced technologist Brent Laster will help you do both! We’ll explain why and how to run AI models locally, cover the basic ideas of agents and RAG, and show how to assemble a simple AI agent in Python that leverages RAG and uses a local model through Ollama. And you'll follow along with hands-on labs and produce your own instance running on your system in a GitHub Codespace.
In this workshop, we'll walk you through what it means to run models locally, how to interact with them, and how to use them as the brain for an agent. Then, we'll enable them to access and use data from a PDF via retrieval-augmented generation (RAG) to make the results more relevant and meaningful. And you'll do all of this hands-on in a ready-made environment with no extra installs required.
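As a preview of the kind of code you'll write, here is a minimal sketch of querying a locally running model through Ollama's HTTP API. It assumes Ollama is serving on its default port and that a model such as llama3.2 has already been pulled; the workshop labs may structure this differently.

```python
import requests

# Ollama exposes a local HTTP API (default port 11434) once the server is running.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",                       # any locally pulled model tag
        "prompt": "Explain RAG in one sentence.",
        "stream": False,                           # return one JSON object instead of a stream
    },
)
print(resp.json()["response"])
```

In the agent and RAG labs, a call like this becomes the "brain" the agent consults after retrieving relevant chunks from the PDF.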
No experience with these technologies is needed, although we do assume you have a basic understanding of LLMs.
Attendees will need the following to do the hands-on labs:
MCP, or Model Context Protocol, is a standardized framework that allows AI agents to seamlessly connect with external data sources, APIs, and tools. Its main purpose is to make AI agents more intelligent and context-aware by giving them real-time access to live information and actionable capabilities beyond their built-in knowledge.
Join AI technologist, author, and trainer Brent Laster as we learn what MCP is, how it works, and how it can be used to create AI agents that can work with any process that implements MCP. You'll work with MCP concepts, code, and servers through hands-on labs that teach you how to use the protocol with AI agents.
With MCP, developers can easily integrate AI agents with a wide variety of systems, from internal business databases to third-party services, without having to build custom integrations for each use case. MCP servers act as gateways, exposing specific actions and knowledge to the AI agent, which can then dynamically discover and use these capabilities as needed. This approach streamlines the process of adding new functionalities to AI agents and reduces ongoing maintenance.
MCP is particularly useful for scenarios where AI agents need up-to-date information or need to perform actions in external systems, such as customer support bots fetching live ticket data, enterprise assistants accessing knowledge bases, or automation agents processing transactions. By leveraging MCP, organizations can create more adaptable, powerful, and enterprise-ready AI solutions that respond to real-world business needs in real time.
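To make this concrete, here is a minimal sketch of an MCP server exposing a single tool, written with the FastMCP helper from the official Python SDK (the mcp package). The ticket-status tool is a made-up example for illustration only.

```python
from mcp.server.fastmcp import FastMCP

# Name the server; an MCP-capable agent discovers its tools at connection time.
mcp = FastMCP("ticket-lookup")

@mcp.tool()
def get_ticket_status(ticket_id: str) -> str:
    """Return the status of a support ticket (stubbed here for illustration)."""
    return f"Ticket {ticket_id}: open"

if __name__ == "__main__":
    mcp.run()  # serves over stdio by default, so a local agent can attach directly
```

The agent never needs custom integration code for this server: it asks what tools exist, reads their signatures and descriptions, and calls them as needed.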
Attendees will need the following to do the hands-on labs:
This condensed hands-on session provides developers and technical leaders with a practical foundation in AI system security — from understanding the unique attack surfaces of LLMs and agents to applying effective guardrails, validation, and monitoring.
Participants explore key security principles across LLM pipelines, agent architectures, and Model Context Protocol (MCP) environments.
Through five focused labs, attendees learn how to detect vulnerabilities, prevent data leakage, and implement safe execution patterns for AI-driven workflows.
By the end of the session, participants will have a working understanding of common AI attack vectors, defensive design patterns, and secure deployment practices for agents and MCP-based systems.
The workshop combines rapid conceptual overviews with practical, short labs:
1. Lab 1 – Understanding AI Threat Surfaces
Explore how AI systems differ from traditional apps: prompt injection, training data poisoning, model exfiltration, and output manipulation.
2. Lab 2 – Secure Prompt and Context Handling
Implement techniques for input sanitization, instruction filtering, and chain-of-thought isolation in LLM and agent pipelines (a minimal input-screening sketch appears after this list).
3. Lab 3 – Guardrails and Policy Enforcement
Apply open-source guardrail frameworks (e.g., Guardrails.ai or LlamaGuard) to validate responses and prevent unsafe completions.
4. Lab 4 – Securing Agent Tool Use
Configure tools and connectors with least-privilege access and safe error handling. Examine how to restrict and audit agent actions.
5. Lab 5 – Securing MCP Interactions
Learn how to authenticate, authorize, and scope MCP server calls. Practice securing endpoints and preventing untrusted tool injection.
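As a small taste of the Lab 2 material, here is a deliberately naive sketch of screening user input before it reaches an LLM prompt. The patterns are invented for illustration; production guardrail frameworks rely on classifiers and policy engines rather than a handful of regexes.

```python
import re

# Toy deny-list of phrases commonly associated with prompt-injection attempts.
SUSPICIOUS = [
    r"ignore (all|previous) instructions",
    r"reveal (the )?system prompt",
    r"disregard your (rules|guidelines)",
]

def screen_user_input(text: str) -> str:
    """Raise if the input looks like an injection attempt; otherwise pass it through."""
    for pattern in SUSPICIOUS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            raise ValueError("Possible prompt injection detected; route to human review")
    return text
```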
Outcome:
Participants leave with an actionable framework for assessing AI application risk, implementing safeguards, and integrating secure development practices into their LLM and agent workflows.
In this intensive 3-hour hands-on workshop, you'll master the art and science of prompt engineering. You'll learn systematic frameworks for constructing effective prompts, from foundational elements to cutting-edge techniques including multi-expert prompting, probability-based optimization, and incentive framing. Through five progressive labs using Ollama and llama3.2:3b in GitHub Codespaces, you'll build production-ready templates and see quality improvements in real time. Leave with immediately applicable techniques, reusable prompt patterns, and a decision framework for selecting the right approach for any AI task.
Modern AI systems deliver many capabilities, but their effectiveness depends entirely on how well they're prompted. This intensive workshop transforms prompt engineering from trial-and-error guesswork into a systematic, measurable discipline. You'll learn proven frameworks for constructing effective prompts and cutting-edge optimization techniques that deliver quality improvements in real-world applications.
Through five hands-on labs in GitHub Codespaces, you'll work with Ollama hosting llama3.2:3b to implement each technique, measure its impact, and build reusable templates. Every concept is immediately validated with code you can deploy tomorrow.
What You'll Master
The workshop progresses through five core competency areas, each reinforced with a practical lab:
Foundations of Effective Prompting begins with the six essential elements every prompt needs: task definition, context, constraints, role assignment, output format, and examples. You'll systematically transform a poorly constructed prompt into an optimized version, measuring quality improvements at each step. This foundation eliminates the guesswork and establishes a repeatable framework for all future prompt engineering work.
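As a rough illustration of that framework (the template wording and example values below are invented, not the workshop's own materials), a reusable prompt can make all six elements explicit:

```python
# Illustrative template: each placeholder corresponds to one of the six elements.
PROMPT_TEMPLATE = """\
Role: {role}
Task: {task}
Context: {context}
Constraints: {constraints}
Output format: {output_format}
Examples:
{examples}
"""

prompt = PROMPT_TEMPLATE.format(
    role="You are a senior technical writer.",
    task="Summarize the release notes below for end users.",
    context="Audience: non-technical customers of a mobile app.",
    constraints="Maximum 100 words; no internal ticket numbers.",
    output_format="Three bullet points.",
    examples="- Faster sync: photos now upload in the background.",
)
```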
Pattern-Based Techniques introduces few-shot learning and Chain of Thought (CoT) reasoning. Few-shot prompting teaches models through examples rather than explanations, dramatically improving consistency on classification and transformation tasks. Chain of Thought makes reasoning transparent, improving accuracy on complex problems by 20-40% while enabling you to verify the model's logic. You'll build a classification system and compare zero-shot, few-shot, and CoT approaches with measurable accuracy metrics.
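For a flavor of the difference, here are two toy prompts, one few-shot and one Chain of Thought; both are illustrative stand-ins rather than the lab's actual exercises.

```python
# Few-shot: teach the output format and labels through examples, then leave a blank.
few_shot = """Classify the sentiment of each review as positive or negative.

Review: "Battery lasts all day." -> positive
Review: "Screen cracked within a week." -> negative
Review: "Setup took five minutes and just worked." ->"""

# Chain of Thought: ask for visible reasoning before the final answer.
chain_of_thought = """A train travels 120 km in 1.5 hours, then 80 km in 1 hour.
What is its average speed for the whole trip?
Think step by step, then give the final answer on its own line."""
```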
Advanced Structural Techniques combines role-based prompting, structured outputs, and constrained generation into enterprise-ready patterns. You'll create an API documentation generator that uses expert personas, enforces strict formatting requirements, outputs reliable JSON, and maintains 90%+ consistency across diverse inputs. This lab produces production templates with automated validation—patterns you can immediately deploy in your organization.
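A minimal sketch of the validation half of that pattern might look like the following; the three required keys are invented here to keep the example short.

```python
import json

SCHEMA_HINT = """Return ONLY valid JSON with exactly these keys:
{"endpoint": string, "method": string, "description": string}"""

def validate_doc_entry(raw: str) -> dict:
    """Reject completions that are malformed or missing required keys."""
    entry = json.loads(raw)  # json.JSONDecodeError (a ValueError) on malformed output
    missing = {"endpoint", "method", "description"} - entry.keys()
    if missing:
        raise ValueError(f"Missing keys: {missing}")
    return entry
```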
Cutting-Edge Methods explores two powerful techniques gaining traction in 2025-2026. Multi-expert prompting simulates a council of experts (technical, business, security) analyzing complex decisions from multiple perspectives, catching blind spots that single-perspective prompts miss. Reverse prompting flips the traditional interaction: instead of you trying to perfectly specify requirements, the AI asks clarifying questions to discover what you really need. You'll measure 40-60% improvements in decision quality and 80-90% gains in requirement clarity.
Probabilistic and Incentive-Based Optimization introduces the latest research-backed techniques for extracting maximum quality from language models. Stanford's breakthrough probability-based prompting—requesting multiple responses with confidence scores—improves reliability by 30-50% on ambiguous tasks. Incentive framing (yes, “This is critical” and “Take your time” actually work) increases thoroughness by 20-40%. Combined, these techniques deliver 50-70% quality improvements on high-stakes decisions.
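Here is a toy sketch of the confidence-score idea, with an invented question and a trivial selection step; the lab itself measures the effect more rigorously.

```python
import json

confidence_prompt = """Question: Which HTTP status code means "Too Many Requests"?

Give 3 candidate answers as a JSON array of objects:
[{"answer": "...", "confidence": 0.0-1.0}]"""

def pick_most_confident(raw: str) -> dict:
    """Parse the model's candidates and keep the one it rates most confident."""
    candidates = json.loads(raw)
    return max(candidates, key=lambda c: c["confidence"])
```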
Java has quietly grown into a more expressive, flexible, and modern language — but many developers haven’t kept up with the latest features. This two-part workshop explores the most useful additions to Java from recent releases, with hands-on examples and real-world scenarios.
Whether you’re still catching up from Java 8 or already using Java 21+, this series will give you a practical edge in writing cleaner, more modern Java code.
This series covers recent language features such as sealed classes, records, and switch expressions.
In this half-day workshop, we’ll practice Test-Driven Development (TDD) by solving a real problem step by step. You’ll learn how to think in tests, write clean code through refactoring, and use your IDE and AI tools effectively. We’ll also explore how modern Java features (like lambdas and streams) enhance testability, and discuss what’s worth testing — and what’s not.
Java has accumulated a diverse toolbox for concurrency and asynchrony over the decades, ranging from classic threads to parallel streams, from Future to CompletableFuture, and from reactive libraries to the latest innovations, including virtual threads, structured concurrency, and the Vector API. But with so many options, the question is: which ones should we use today, which still matter, and which belong in the history books?
In this talk, we’ll explore the entire spectrum:
We’ll also tackle the hard questions:
Prometheus and Grafana form the backbone of modern metrics-based observability, yet many teams struggle to move from “we collect metrics” to “we understand our systems.”
This talk builds a clear mental model for Prometheus and Grafana: how metrics are exposed, scraped, stored, queried, and visualized — and how those metrics connect to real operational decisions. We’ll explore Prometheus architecture, PromQL, Kubernetes integration via the Prometheus Operator, and how metrics power advanced workflows like canary deployments with Argo Rollouts and OpenTelemetry-based telemetry.
Attendees will leave knowing what to measure, how to measure it, and where to start on Monday.
This talk builds a practical mental model for metrics-based observability using Prometheus and Grafana. Rather than focusing solely on dashboards, we’ll explore how metrics are exposed, collected, queried, and ultimately used to make real operational decisions. We’ll connect application-level instrumentation, Kubernetes-native monitoring, and modern telemetry standards, showing how Prometheus fits into today’s production environments and deployment workflows.
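To ground the "how metrics are exposed" step, here is a minimal sketch of instrumenting a Python service with the prometheus_client library; the metric names and scrape port are arbitrary choices for illustration.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["route"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

start_http_server(8000)  # exposes /metrics for Prometheus to scrape

while True:
    with LATENCY.time():              # observe how long the "work" takes
        time.sleep(random.random() / 10)
    REQUESTS.labels(route="/checkout").inc()
```

Prometheus scrapes localhost:8000/metrics on its configured interval, and a PromQL expression such as rate(app_requests_total[5m]) turns the raw counter into a per-second request rate you can graph in Grafana or alert on.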
Microservices architecture has become a buzzword in the tech industry, promising unparalleled agility, scalability, and resilience. Yet, according to Gartner, more than 90% of organizations attempting to adopt microservices will fail. How can you ensure you're part of the successful 10%?
Success begins with looking beyond the superficial topology and understanding the unique demands this architectural style places on the teams, the organization, and the environment. These demands must be balanced against the current business needs and organizational realities while maintaining a clear and pragmatic path for incremental evolution.
In this session, Michael will share some real-world examples, practical insights, and proven techniques to balance both the power and complexities of microservices. Whether you're considering adopting microservices or already on the journey and facing challenges, this session will equip you with the knowledge and tools to succeed.
As code generation becomes increasingly automated, our role as developers and architects is evolving. The challenge ahead isn’t how to get AI to write more code; it’s how to guide it toward coherent, maintainable, and purposeful systems.
In this session, Michael Carducci reframes software architecture for the era of intelligent agents. You’ll learn how architectural constraints, composition, and trade-offs provide the compass for orchestrating AI tools effectively. Using principles from the Tailor-Made Architecture Model, Carducci introduces practical mental models to help you think architecturally, communicate intent clearly to your agents, and prevent automation from accelerating entropy. This talk reveals how the enduring discipline of architecture becomes the key to harnessing AI—not by replacing human creativity, but by amplifying it.
REST APIs often fall into a cycle of constant refactoring and rewrites, leading to wasted time, technical debt, and endless rework. This is especially difficult when you don't control the API clients.
But what if this could be your last major API refactor? In this session, we’ll dive into strategies for designing and refactoring REST APIs with long-term sustainability in mind—ensuring that your next refactor sets you up for the future.
You’ll learn how to design APIs that can adapt to changing business requirements and scale effectively without requiring constant rewrites. We’ll explore principles like extensibility, versioning, and decoupling, all aimed at future-proofing your API while keeping backward compatibility intact. Along the way, we’ll examine real-world examples of incremental API refactoring, where breaking the cycle of endless rewrites is possible.
This session is perfect for API developers, architects, and tech leads who are ready to stop chasing their tails and want to invest in designing APIs that will stand the test of time—so they can focus on building great features instead of constantly rewriting code.
Architectural decisions are often influenced by blindspots, biases, and unchecked assumptions, which can lead to significant long-term challenges in system design. In this session, we’ll explore how these cognitive traps affect decision-making, leading to architectural blunders that could have been avoided with a more critical, holistic approach.
You’ll learn how common biases—such as confirmation bias and anchoring—can cloud judgment, and how to counteract them through problem-space thinking and reflective feedback loops. We’ll dive into real-world examples of architectural failures caused by biases or narrow thinking, and discuss strategies for expanding your perspective and applying critical thinking to system design.
Whether you’re an architect, developer, or technical lead, this session will provide you with tools to recognize and mitigate the impact of biases and blindspots, helping you make more informed, thoughtful architectural decisions that stand the test of time.
AI models are evolving fast, but the systems around them aren’t. Every backend change still breaks your carefully tuned AI client, while on the web, every change to a server doesn’t require you to download a new browser. What if AI worked the same way?
In this talk, Michael Carducci explores the architecture of 3rd Generation Agentic AI, building on the ideas and technologies introduced in Data Architecture for AI. You’ll discover how JSON-LD, Hydra, and semantic integration enable truly evolvable, interoperable AI ecosystems at web scale. Through live demos and real-world examples, Carducci shows how these web-native standards create APIs that describe themselves, adapt to change, and empower agents to discover and interact safely without brittle coupling. The real frontier isn’t smarter models—it’s shared meaning—and that’s an architectural problem worth solving.
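For readers new to the building blocks, here is a tiny hand-written example of a JSON-LD resource with a Hydra-style operation attached; the URLs and fields are hypothetical. The point is that the payload carries its own vocabulary and affordances instead of relying on out-of-band documentation.

```python
# Hypothetical resource: @context maps local names onto shared vocabularies,
# and the Hydra-style operation advertises what a client or agent may do next.
product = {
    "@context": {
        "schema": "https://schema.org/",
        "hydra": "http://www.w3.org/ns/hydra/core#",
        "name": "schema:name",
        "price": "schema:price",
    },
    "@id": "https://api.example.com/products/42",
    "@type": "schema:Product",
    "name": "Example Widget",
    "price": 19.99,
    "hydra:operation": [
        {"@type": "hydra:Operation", "hydra:method": "PUT"},
    ],
}
```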
Everyone’s talking about AI models, but almost no one is talking about the data architecture that makes them intelligent. Today’s AI systems are brittle because they lack context, semantics, and shared understanding. In this session, Michael Carducci explores how linked data, RDF, ontologies, and knowledge graphs solve the very problems that leave the industry floundering: hallucination, inconsistency, and lack of interoperability.
Drawing from real-world examples, Carducci connects decades of overlooked research in semantic web technologies to the challenges of modern AI and agentic systems. You’ll see how meaning itself can be modeled, linked, and reasoned over; and why the future of AI depends not on bigger models, but on smarter data.
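As a small, self-contained illustration of those ideas, here is a handful of triples in an rdflib graph and a SPARQL query over them; the namespace and facts are made up for the example.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("https://example.org/")
g = Graph()
g.bind("ex", EX)

alice = URIRef("https://example.org/people/alice")
g.add((alice, RDF.type, EX.Engineer))                 # Alice is an Engineer
g.add((alice, EX.worksOn, EX.KnowledgeGraphProject))  # ...and works on a project
g.add((alice, EX.name, Literal("Alice")))

# Query the graph: who works on the knowledge graph project?
results = g.query("""
    PREFIX ex: <https://example.org/>
    SELECT ?person WHERE { ?person ex:worksOn ex:KnowledgeGraphProject }
""")
for row in results:
    print(row.person)
```

Because the identifiers are URIs, these triples can be linked to facts published elsewhere, which is exactly the kind of shared context the talk argues modern AI systems are missing.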
Security problems empirically fall into two categories: bugs and flaws. Roughly half of the problems we encounter in the wild are bugs, and about half are design flaws. A significant number of the bugs can be found through automated testing tools, which frees you up to focus on the more pernicious design issues.
In addition to detecting the presence of common bugs, as we have done with static analysis for years, we can also imagine automating the application of corrective refactoring. In this talk, I will discuss using OpenRewrite and the Moderne CLI to fix common security issues and keep them from coming back.
In this talk we will focus on:
If you ask the typical technologist how to build a secure system, they will include encryption in the solution space. While this is a crucial security feature, in and of itself, it is an insufficient part of the plan. Additionally, there are a hundred ways it could go wrong. How do you know if you're doing it right? How do you know if you're getting the protections you expect?
Encryption isn't a single thing. It is a collection of tools combined to solve problems of secrecy, authentication, integrity, and more. Sometimes those tools are deprecated because they no longer provide the protections that they once did. Technology changes. Attacks change. Who in your organization is tracking and validating your encryption strategy? How are quantum computing advancements going to change the game? No background will be assumed and not much math will be shown.
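To make the "collection of tools" point concrete, here is a small sketch using an authenticated-encryption primitive (AES-GCM from the Python cryptography package), which provides secrecy and integrity in a single operation; the message and associated data are placeholders.

```python
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)  # must be unique per message under the same key
ciphertext = aesgcm.encrypt(nonce, b"wire transfer: $100", b"header-v1")

# Decryption verifies integrity: tampering with the ciphertext or the associated
# data raises InvalidTag instead of silently returning garbage.
plaintext = aesgcm.decrypt(nonce, ciphertext, b"header-v1")
```

Even this short example hides choices someone has to own: key storage, nonce management, and when to retire the algorithm itself.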
If you are getting tired of the appearance of new types of databases… too bad. We are increasingly relying on a variety of data storage and retrieval systems for specific purposes. Data does not have a single shape, and indexing strategies that work for one are not necessarily good fits for others. So after hierarchical, relational, object, graph, column-oriented, document, temporal, append-only, and everything else, get ready for vector databases to assist in the systematization of machine learning systems.
This will be an overview of the benefits of vector databases as well as an introduction to the major players.
We will focus on open source versus commercial players, hosted versus local deployments, and the attempts to add vector search capabilities to existing storage systems.
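As a quick taste of the developer experience, here is a vector similarity search in a few lines using ChromaDB, chosen here purely as one example of the kind of player the session surveys.

```python
import chromadb

client = chromadb.Client()  # in-memory instance; persistent and hosted modes also exist
collection = client.create_collection("docs")

# Documents are embedded on insert using the collection's embedding function.
collection.add(
    ids=["a", "b"],
    documents=["Ollama runs language models locally", "Prometheus scrapes metrics"],
)

# The query text is embedded and compared against the stored vectors.
results = collection.query(query_texts=["local LLM runtime"], n_results=1)
print(results["documents"])
```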
We will cover:
There's an implied context to your software running in the world and processing data. The problem is that it is usually a reductive and insufficient context to capture the fluency of change that occurs at multiple layers. This need for shared context spreads to API usage which often necessitates fragile, custom development.
In this talk we will address the importance of dynamic context in software systems and how to engender flexible, sufficiently rich context-based systems.
We will cover the history of context-based thinking in the design of software systems and network protocols, and how the ideas are merging into something along the lines of “Information DNS,” where we resolve things at the time and place of execution into the form in which we need them.
Consider software systems with the technical and financial properties of the Web.
While this is a developing approach to software development, it builds on established ideas and will help provide the basis for next-generation development.
One of the nice operational features of the REST architectural style as an approach to API design is that it allows for separate evolution of the client and server. Depending on the design choices a team makes, however, you may be putting a higher burden on your clients than you intend when you introduce breaking changes.
By taking advantage of the capabilities of OpenRewrite, we can start to manage the process of independent evolution while minimizing the impact. Code migration and refactoring can be used to transition existing clients away from older or deprecated APIs and toward new versions with less effort than trying to do it by hand.
In this talk we will focus on:
Managing API lifecycle changes by automating the migration from deprecated to supported APIs.
Discussing API evolution strategies and when they require assisted refactoring and when they don’t.
Integrating OpenRewrite into API-first development to ensure client code is always up to date with ease.