In recent years the development of the Java language has accelerated significantly. Some of the more recent features are quite exciting and can lead to better modeling, fewer errors, less verbosity, and better developer productivity. In this presentation we will take a close look at these features and their significance. We will use concrete examples as use cases to explore the modern features. In addition to looking at these features we will also take a look under the hood at how they are implemented. This will help us gain a better understanding of the built-in efficiencies these features come with.
Get ready to take part in some live coding as we dive into these amazing features.
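As a small taste of the kind of feature we'll dig into, here is a minimal sketch (the Point type is purely illustrative) of how a record collapses a page of boilerplate into a single declaration:

```java
// A record replaces the constructor, accessors, equals, hashCode, and
// toString of a classic hand-written POJO with one line.
public record Point(int x, int y) {

    // Compact constructor: validation without restating the fields.
    public Point {
        if (x < 0 || y < 0) throw new IllegalArgumentException("coordinates must be non-negative");
    }

    public static void main(String[] args) {
        Point p = new Point(3, 4);
        System.out.println(p);      // Point[x=3, y=4]
        System.out.println(p.x());  // generated accessor
    }
}
```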
We live in a world of microservices. Yet, what is a microservice? What defines the boundaries of a microservice? How do we define the relationships between microservices? Thankfully domain-driven design gives us the concepts and practices to better design and decompose our services.
In this session we will consider many of the concepts of DDD — how bounded contexts use a ubiquitous language to model the domain, and how context maps can be used to establish the interconnections between services, as well as aggregates and domain events, all of which will serve us well as we go about creating our microservices.
In this session we will go further with DDD, considering aspects like aggregates, domain events, factories and repositories, and consider how DDD can be represented in architecture, be it hexagonal, REST, or RPC.
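To make the vocabulary concrete, here is a minimal, hypothetical sketch of an aggregate that guards its own invariants and records a domain event; the Order and OrderPlaced names are purely illustrative:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// Hypothetical domain event raised by the aggregate.
record OrderPlaced(UUID orderId) {}

// A minimal Order aggregate root: invariants are enforced inside the
// boundary, and state changes are recorded as domain events.
class Order {
    private final UUID id = UUID.randomUUID();
    private final List<String> lineItems = new ArrayList<>();
    private final List<Object> events = new ArrayList<>();
    private boolean placed;

    void addLineItem(String sku) {
        if (placed) throw new IllegalStateException("cannot modify a placed order");
        lineItems.add(sku);
    }

    void place() {
        if (lineItems.isEmpty()) throw new IllegalStateException("an order needs at least one line item");
        placed = true;
        events.add(new OrderPlaced(id));   // published later by the application layer
    }

    List<Object> pendingEvents() { return List.copyOf(events); }
}
```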
If you need a deeper dive into DDD, then this is the session for you.
Functional and reactive programming is gaining popularity and use. One hurdle developers face using these approaches is exception handling. Dealing with exceptions in these styles is confusing in the beginning and is often messy and error prone. In this presentation we will step back, take a closer look at exception handling in functional and reactive programming, and discuss the dos and don'ts. We will use concrete examples and live coding to illustrate the problem and the possible solutions.
We will explore options available in different languages and libraries and learn from them.
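As a small preview of the kind of problem we'll wrestle with, here is one common Java workaround, sketched under the assumption that a failed parse should simply drop out of the pipeline rather than blow it up:

```java
import java.net.URI;
import java.net.URISyntaxException;
import java.util.List;
import java.util.Optional;
import java.util.stream.Stream;

// Checked exceptions don't compose with Stream.map; one workaround is to
// fold the failure into the element, e.g. with Optional (or a richer
// Try/Either type from a library such as Vavr).
public class SafeParsing {
    static Optional<URI> parse(String raw) {
        try {
            return Optional.of(new URI(raw));
        } catch (URISyntaxException e) {
            return Optional.empty();        // failure becomes data, not a thrown exception
        }
    }

    public static void main(String[] args) {
        List<URI> uris = Stream.of("https://example.com", "::not a uri::")
                .map(SafeParsing::parse)
                .flatMap(Optional::stream)  // drop the failures
                .toList();
        System.out.println(uris);
    }
}
```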
Have you looked at the methods of CompletableFuture and wondered how to actually use them? Do the function names appear strange, or at least different from what we're used to? Let's take another look so those names become easier to remember and to make use of.
In this presentation we will take a practical look at CompletableFuture, how to make good use of it, and how to deal with exceptions.
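To set the stage, here is a minimal sketch of that vocabulary in action; the quote-fetching service is made up for illustration:

```java
import java.util.concurrent.CompletableFuture;

// thenApply transforms a result, thenCombine merges two independent
// futures, and exceptionally recovers from failures anywhere upstream.
public class QuoteService {
    static CompletableFuture<Double> fetchPrice(String symbol) {
        return CompletableFuture.supplyAsync(() -> {
            if (symbol.isBlank()) throw new IllegalArgumentException("unknown symbol");
            return 42.0;                                            // stand-in for a remote call
        });
    }

    public static void main(String[] args) {
        CompletableFuture<String> quote = fetchPrice("NFJS")
                .thenApply(price -> price * 1.08)                   // add tax
                .thenCombine(fetchPrice("JVM"), Double::sum)        // merge with another async result
                .exceptionally(throwable -> -1.0)                   // fallback instead of a thrown exception
                .thenApply(total -> "total: " + total);

        System.out.println(quote.join());
    }
}
```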
Albert Einstein once said — “We cannot solve our problems with the same thinking we used when we created them.”
As developers, architects, managers, and tech leads, our job entails solving big problems. However, our systems are composed of parts, and it is the interaction of those parts that ensures the system works, and tells us where to go looking when things fail. What does it mean to think in systems? What are the benefits, and what tools are available to us? If any of this intrigues you (and it should) come join me as we explore a new way of thinking about problems, both big and small.
In this session we will take a page from Donella H. Meadows's fantastic book “Thinking in Systems” and attempt to get an understanding of what makes up a system, how we can tease apart the moving parts of the system, and get into the mindset of a “systems thinker”.
In this example-driven session, we'll review several tips and tricks to make the most out of your Spring development experience. You'll see how to apply the best features of Spring and Spring Boot, including the latest and greatest features of Spring Framework 6.x and Spring Boot 3.x with an eye to what's coming in Spring 7 and Boot 4.
Spring has been the de facto standard framework for Java development for nearly two decades. Over the years, Spring has continued to evolve and adapt to meet the ever-changing requirements of software development. And for nearly half that time, Spring Boot has carried Spring forward, capturing some of the best Spring patterns as auto-configuration.
As with any framework or language that has this much history and power, there are just as many ways to get it right as there are to get it wrong. How do you know that you are applying Spring in the best way in your application?
You'll need…
In this example-driven presentation, we'll focus on how to build reactive APIs in Spring. We'll start with Spring WebFlux, a reactive reimagining of the popular Spring MVC framework for HTTP-based APIs. Then we'll have a look at RSocket, an intriguing new communication protocol that is reactive by design.
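As a taste of the WebFlux programming model, here is a minimal sketch of a streaming endpoint (it assumes spring-boot-starter-webflux on the classpath; the endpoint itself is purely illustrative):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

import java.time.Duration;

// Returning a Flux lets the framework stream results to the client
// without tying up a thread per request.
@RestController
class GreetingController {

    @GetMapping(path = "/greetings", produces = "text/event-stream")
    Flux<String> greetings() {
        return Flux.just("hello", "bonjour", "hola")
                   .delayElements(Duration.ofSeconds(1));   // emit one greeting per second
    }
}
```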
Traditionally, applications have been built using a blocking, synchronous model. Although comfortable and intuitive for most programmers, this model doesn't scale well. And although there are several new approaches to reactive programming, they don't necessarily fit into the familiar programming model that Spring developers are accustomed to working with.
Spring 5 introduced a set of new reactive features, enabling non-blocking, asynchronous code that scales well using minimal threads. Moreover, it builds on the same concepts and programming models that Spring developers have used for years.
In this example-driven presentation, we'll take a high-level look at how Spring applies reactive programming at all layers of an application. You'll also learn the essentials of working with Project Reactor, the reactive programming library for Java that Spring's reactive support builds upon.
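Here is a minimal sketch of those Reactor essentials (assuming the reactor-core dependency): Flux for a stream of values, Mono for at most one, and no work until something subscribes:

```java
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ReactorBasics {
    public static void main(String[] args) {
        // A pipeline is only a description of work until it is subscribed to.
        Flux<Integer> evens = Flux.range(1, 10)
                .filter(n -> n % 2 == 0)
                .map(n -> n * n);

        Mono<Integer> sum = evens.reduce(0, Integer::sum);

        // Subscribing triggers the flow; prints 220.
        sum.subscribe(total -> System.out.println("sum of squared evens: " + total));
    }
}
```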
Traditionally, applications have been built using a blocking, synchronous model. Although comfortable and intuitive for most programmers, this model doesn't scale well. And although there are several new approaches to reactive programming, they don't necessarily fit into the familiar programming model that Spring developers are accustomed to working with.
Spring 5 introduced a set of new reactive features, enabling non-blocking, asynchronous code that scales well using minimal threads. Moreover, it builds on the same concepts and programming models that Spring developers have used for years.
Apache Kafka is the core of an amazing ecosystem of tools and frameworks that enable us to get more value from our data. Let's take a journey through this wonderful world and see what we can learn.
In this session, we'll have a gentle introduction to Apache Kafka, and then a survey of some of the more popular components in the Kafka ecosystem. We will be looking at the Kafka Producer and Consumer libraries, Kafka Connect, Kafka Streams, the Confluent Schema Registry, and ksqlDB. You will leave loaded with ideas of ways to put Kafka to use in your organization, and a list of resources to help you on your journey.
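As a small preview of the producer library, here is a bare-bones sketch (it assumes the kafka-clients dependency and a broker at localhost:9092; the topic name is made up):

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class OrderProducer {
    public static void main(String[] args) {
        // Minimal configuration: where the broker is and how to serialize keys/values.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-1001", "{\"total\": 42.0}"));
        } // close() flushes any buffered records
    }
}
```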
Event-driven architectures look great on paper (or a whiteboard), but it's easy to run into snags when trying to implement them. In this talk, we'll see how tools from the Apache Kafka ecosystem can make this task easier and a lot more fun!
In this session, with Apache Kafka as the backbone, we'll use Micronaut to implement an event-driven architecture that is loosely coupled and easily extensible. Then we'll extend it with a Kafka Streams application without affecting any of the existing applications.
Some say that data is the new oil, and like oil, data needs to be collected, refined, and distributed to gain value from it. For years we've collected data into tankers called databases, and when we had enough, we shipped batches of it to be refined by some process. Then when that was done, we'd make it available to end-users. However, unlike oil, data tends to lose its value the longer we wait to use it. (Data is the new milk?)
So, what if we could hook up a pipeline* to the source of the oil, er… data, and transform it in near real time, on its way to the end-users? That's precisely what we can do with Apache Kafka and Kafka Streams. In this session, we’ll see how we can use Kafka Streams to enrich, aggregate, summarize, and otherwise transform our data while it is flowing through our systems.
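To give a flavor of what that looks like in code, here is a minimal Kafka Streams sketch; the topic names and the “enrichment” step are hypothetical stand-ins:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

// Read raw events, keep the interesting ones, reshape them, and write the
// result to a new topic while the data is in flight.
public class EnrichmentTopology {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        KStream<String, String> purchases = builder.stream("purchases");
        purchases.filter((key, value) -> value.contains("\"status\":\"COMPLETED\""))
                 .mapValues(value -> value.toUpperCase())   // stand-in for a real enrichment step
                 .to("enriched-purchases");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "enrichment-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        new KafkaStreams(builder.build(), props).start();
    }
}
```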
Architecture is not a static representation of a system. There are several complexities and risks involved in creating one. One way to mitigate the risk is to evolve the architecture. But there are risks in evolving just as there are risks in not evolving. In this interactive workshop we will explore a set of practices that we can use to mitigate the risks. Then we will dive into discussing some common and popular architectural patterns.
Finally, we will take some example applications and discuss how to evolve architecture to meet the needs of those applications.
Computer with a Git client installed to access the version control system, which will contain lab-related material.
Architecture is not a static representation of a system. There are several complexities and risks involved in creating one. One way to mitigate the risk is to evolve the architecture. But there are risks in evolving just as there are risks in not evolving. In this interactive workshop we will explore a set of practices that we can use to mitigate the risks. Then we will dive into discussing some common and popular architectural patterns.
Finally, we will take some example applications and discuss how to evolve architecture to meet the needs of those applications.
Computer with a Git client installed to access the version control system, which will contain lab-related material.
There are some amazing features that are being introduced in Java in the coming years. Some of these features are just around the corner. However, these features, in various forms, have been in other languages for a while.
In this presentation we will take a look at some features in Kotlin, Scala, and JavaScript that are soon to be features of Java, understand their power and benefits and how we can make use of them. We will relate these features to how they are manifesting in Java.
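For instance, pattern matching for switch, which will feel familiar to anyone who has used Scala's match or Kotlin's when, now pairs with sealed types in Java; a minimal sketch (Java 21, with illustrative Shape types):

```java
// With a sealed hierarchy, the compiler can check the switch for exhaustiveness.
sealed interface Shape permits Circle, Rectangle {}
record Circle(double radius) implements Shape {}
record Rectangle(double width, double height) implements Shape {}

public class Areas {
    static double area(Shape shape) {
        return switch (shape) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Rectangle r -> r.width() * r.height();
        };
    }

    public static void main(String[] args) {
        System.out.println(area(new Circle(2)));
        System.out.println(area(new Rectangle(3, 4)));
    }
}
```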
The power of Java's functional programming goes far beyond lambdas and the filter and map operations of the Stream API. In this presentation we will look at various data transformations and see how the functional APIs in the JDK can provide concise solutions for these.
We will start with the Stream API and quickly dive into some newer functions and also the utilities of the Collectors. Along the way you will get a deeper understanding of some of the excellent functions that are hidden in plain sight.
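As a small sample of those hidden gems, here is a sketch using groupingBy with a downstream collector and teeing (Java 12+) to compute two results in a single pass:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CollectorSamples {
    public static void main(String[] args) {
        List<String> langs = List.of("Java", "Kotlin", "Scala", "Clojure", "Groovy");

        // Group names by their length.
        Map<Integer, List<String>> byLength = langs.stream()
                .collect(Collectors.groupingBy(String::length));
        System.out.println(byLength);

        // teeing: run two collectors over the same stream, then merge the results.
        double averageLength = langs.stream()
                .collect(Collectors.teeing(
                        Collectors.summingInt(String::length),
                        Collectors.counting(),
                        (total, count) -> (double) total / count));
        System.out.println(averageLength);
    }
}
```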
You have been using Git for a while. You know how to stage and commit your work, create and delete branches, and collaborate with your team members using remotes. But Git often leaves you confused — ever committed your work to the wrong branch? Even worse, ever accidentally deleted a branch that you needed to keep around? And what in God's good name is “Detached HEAD state”? Why tag commits, when we have branches? Is there a better workflow than just using merges? What's the difference between a merge and a rebase?
The answer to all of these questions, and more, lies in the constitution of a commit, and the directed acyclic graph (DAG) that Git uses to manage your history. This, right here, is the key to understanding everything in Git.
In this hands-on workshop, we will level up your Git skills. We will foray into the underbelly of Git, and reveal the mystery behind the arcane interface that is the Git CLI.
By the end of this workshop, you will have a keen understanding of how best to use Git, as well as know how to dig yourself out of any prickly situation you might find yourself in. You will become your team's hero(ine). Most importantly, you will walk away with a keen appreciation of how beautiful and elegant Git really is.
You have been using Git for a while. You know how to stage and commit your work, create and delete branches, and collaborate with your team members using remotes. But Git often leaves you confused — ever committed your work to the wrong branch? Even worse, ever accidentally deleted a branch that you needed to keep around? And what in God's good name is “Detached HEAD state”? Why tag commits, when we have branches? Is there a better workflow than just using merges? What's the difference between a merge and a rebase?
The answer to all of these questions, and more, lies in the constitution of a commit, and the directed acyclic graph (DAG) that Git uses to manage your history. This, right here, is the key to understanding everything in Git.
In this hands-on workshop, we will level up your Git skills. We will foray into the underbelly of Git, and reveal the mystery behind the arcane interface that is the Git CLI.
By the end of this workshop, you will have a keen understanding of how best to use Git, as well as know how to dig yourself out of any prickly situation you might find yourself in. You will become your team's hero(ine). Most importantly, you will walk away with a keen appreciation of how beautiful and elegant Git really is.
In this example-driven presentation, we'll focus on working with reactive data persistence. We'll start by seeing how to create reactive repositories for relational databases with Spring Data R2DBC. Then we'll explore non-relational reactive persistence for MongoDB and Cassandra.
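As a preview, here is a minimal sketch of a reactive repository (it assumes the spring-boot-starter-data-r2dbc dependency; the Customer type is hypothetical):

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import reactor.core.publisher.Flux;

// Spring Data derives the query from the method name and returns reactive
// types instead of blocking collections.
record Customer(@Id Long id, String firstName, String lastName) {}

interface CustomerRepository extends ReactiveCrudRepository<Customer, Long> {
    Flux<Customer> findByLastName(String lastName);
}
```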
Traditionally, applications have been built using a blocking, synchronous model. Although comfortable and intuitive for most programmers, this model doesn't scale well. And although there are several new approaches to reactive programming, they don't necessarily fit into the familiar programming model that Spring developers are accustomed to working with.
Spring 5 introduced a set of new reactive features, enabling non-blocking, asynchronous code that scales well using minimal threads. Moreover, it builds on the same concepts and programming models that Spring developers have used for years.
In this example-driven session, we're going to look at how to implement GraphQL in Spring. You'll learn how Spring for GraphQL builds upon GraphQL Java, recognize the use-cases that are best suited for GraphQL, and how to build a GraphQL API in Spring.
Typical REST APIs deal in resources. This is fine for many use cases, but it tends to be more rigid and less efficient in others.
For example, in a shopping API, it's important to weigh how much or how little information should be provided in a request for an order resource. Should the order resource contain only order specifics, and no details about the order's line items or the products in those line items? If all relevant details are included in the response, it breaks the boundaries of what the resource should offer and is overkill for clients that do not need them. On the other hand, properly factoring the resource requires clients to make multiple requests to the API to fetch the relevant information they need.
GraphQL offers a more flexible alternative to REST, setting aside the resource-oriented model and focusing more on what a client needs. Much as how SQL allows for data from multiple tables to be selected and joined in response to a query, GraphQL offers API clients the possibility of tailoring the response to provide all of the information needed and nothing that they do not need.
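To make that concrete, here is a minimal sketch of an annotated controller with Spring for GraphQL (it assumes the spring-boot-starter-graphql dependency and a matching schema.graphqls file; the Order and LineItem types are hypothetical):

```java
import org.springframework.graphql.data.method.annotation.Argument;
import org.springframework.graphql.data.method.annotation.QueryMapping;
import org.springframework.graphql.data.method.annotation.SchemaMapping;
import org.springframework.stereotype.Controller;

import java.util.List;

record LineItem(String product, int quantity) {}
record Order(String id) {}

@Controller
class OrderGraphqlController {

    @QueryMapping                  // resolves the top-level "order" query
    Order order(@Argument String id) {
        return new Order(id);
    }

    @SchemaMapping                 // resolves Order.lineItems only when the client asks for it
    List<LineItem> lineItems(Order order) {
        return List.of(new LineItem("coffee", 2), new LineItem("mug", 1));
    }
}
```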
The web is arguably the single most impactful revolution in human history (to date). By agreeing on a simple set of standards, we have collectively unlocked all the world's information. Documents can be discovered, retrieved, published, and shared so easily we don't even think about it.
Data, on the other hand, is a different story. Our data remains stuck in the 1980s. Locked in silos, each with a different format, interface, and conventions that must be interpreted by a human, parsed, mapped, and converted. Data is at the heart of many problems we solve today, and we produce data exponentially faster than we can consume it.
Today I can request any document from any server on the web. I need to know nothing about the underlying technology the server uses, nothing about how the information is stored or retrieved, and consume it instantly. We've been evolving those same capabilities with data over the past 20 years and the standards, tools, and technologies are reaching critical mass. The linked data revolution is now one that you can no longer ignore. Join us to see what you've been missing.
Completely Rewritten for 2023
Although the Resource-Oriented Architecture is one of the oldest and most successful distributed architectures, it remains poorly understood and often completely overlooked today.
Much of the microservices architecture pattern is focused on taking applications apart, although seemingly everyone has different ideas on how to put things back together again.
In this session, we will start with a summary of the resource-oriented architecture along with its strengths and weaknesses. Michael will share hard-earned, real-world experience applying ROA concepts to complex microservice environments to successfully build an infinitely scalable, extensible, and understandable system.
Mob Programming is a style of programming in which the entire team sits together and works on a single task at a time. Teams that have worked this way have found that many of the problems that plague normal development simply melt away, possibly because communication and learning increase. Teams also find that the quality of their code increases. They find their capacity to create increases. However, the best part of all this is that teams end up being happier and more cohesive.
In this session we introduce the core concepts of mob programming and then get hands-on, mobbing on a coding kata.
Integration, once a luxury, is now a necessity. Doing this well, however, continues to be elusive. Early attempts to build better distributed systems such as DCOM, CORBA, and SOAP were widely regarded as failures. Today the focus is on REST, RPC, and GraphQL-style APIs.
Which is best? The go-to answer for architects is, of course, “it depends.”
In this session, we look at the various API approaches and how they attempt to deal with the challenges of decoupling client from server: evolvability, extensibility, adaptability, and composability.
The biggest challenge is that needs change over time, and APIs must necessarily evolve. Versioning is challenging, and breaking changes are inevitable. You'll leave this session with a high-level understanding of these approaches, their respective trade-offs, and ultimately how to align your API approach with your architectural and organizational goals.
Continuous refactoring is critical to succeeding in projects and is an important part of sustainable agile development.
In this workshop, we will start by discussing how to approach refactoring, the essential steps we need to take, and look into how to incrementally improve the internal design of code to make it extensible, maintainable, and cost-effective to change. In addition to discussing the concepts, we will take several code examples from real projects, discuss the code smells and explore the refactoring techniques. Along the way, we will also dive into refactoring short code samples and measure the quality of code before and after refactoring.
Computer with a Git client to access the Git repository.
Java 8 or newer
Your favorite IDE
Continuous refactoring is critical to succeeding in projects and is an important part of sustainable agile development.
In this workshop, we will start by discussing how to approach refactoring, the essential steps we need to take, and look into how to incrementally improve the internal design of code to make it extensible, maintainable, and cost-effective to change. In addition to discussing the concepts, we will take several code examples from real projects, discuss the code smells and explore the refactoring techniques. Along the way, we will also dive into refactoring short code samples and measure the quality of code before and after refactoring.
Computer with a Git client to access the Git repository.
Java 8 or newer
Your favorite IDE
Kubernetes has become the de-facto orchestrator for containers and now is the best way to start engaging with portable distributed computing. This workshop is for software application developers who want to understand what Kubernetes is all about and how it works. It can be a seemingly complex ecosystem full of terms, architectures, and misinformation. We will break it down so you have a solid understanding of how it works so you can start writing applications that run on this distributed platform.
We will cover many topics such as:
Kubernetes has become the de-facto orchestrator for containers and now is the best way to start engaging with portable distributed computing. This workshop is for software application developers who want to understand what Kubernetes is all about and how it works. It can be a seemingly complex ecosystem full of terms, architectures, and misinformation. We will break it down so you have a solid understanding of how it works so you can start writing applications that run on this distributed platform.
We will cover many topics such as:
JavaScript now has a yearly release cycle. Every release introduces new features that aim to make us (JavaScript developers) more productive, while making our code succinct, expressive, and (potentially) bug-free.
By the end of this workshop you will be intimately familiar with the newer set of JavaScript features, and ready to dive into your next JavaScript project with confidence.
This workshop allows you to explore many of the new features that we use on a daily basis, including:

- let and const
- Map / Set data structures
- class syntax, as well as static/private members and methods

This workshop uses tests to verify and validate all of your changes, so you can be confident in your changes as you go about refactoring your code to use modern JavaScript language features.
Each section of the workshop builds on the previous one, allowing you to continuously refactor code and get tons of practice.
If you want to be fully caught up with all of the changes that have landed in JavaScript between 2015 and 2021, this is the workshop you want to attend!
Please follow the instructions in the README.md file of https://github.com/looselytyped/refactoring-to-modern-javascript
NOTE — If you get an error, be sure that your Node version is up-to-date. Older versions of Node did not support ES modules natively, and you might run into issues.
If you cannot upgrade Node, consider using a tool like https://github.com/nvm-sh/nvm/blob/master/README.md (Mac/Linux) or https://github.com/coreybutler/nvm-windows (Windows) so you can run multiple versions of Node simultaneously.
JavaScript now has a yearly release cycle. Every release introduces new features that aim to make us (JavaScript developers) more productive, while making our code succinct, expressive, and (potentially) bug-free.
By the end of this workshop you will be intimately familiar with the newer set of JavaScript features, and ready to dive into your next JavaScript project with confidence.
This workshop allows you to explore many of the new features that we use on a daily basis, including:

- let and const
- Map / Set data structures
- class syntax, as well as static/private members and methods

This workshop uses tests to verify and validate all of your changes, so you can be confident in your changes as you go about refactoring your code to use modern JavaScript language features.
Each section of the workshop builds on the previous one, allowing you to continuously refactor code and get tons of practice.
If you want to be fully caught up with all of the changes that have landed in JavaScript between 2015 and 2021, this is the workshop you want to attend!
Please follow the instructions in the README.md file of https://github.com/looselytyped/refactoring-to-modern-javascript
NOTE — If you get an error, be sure that your Node version is up-to-date. Older versions of Node did not support ES modules natively, and you might run into issues.
If you cannot upgrade Node, consider using a tool like https://github.com/nvm-sh/nvm/blob/master/README.md (Mac/Linux) or https://github.com/coreybutler/nvm-windows (Windows) so you can run multiple versions of Node simultaneously.
Don't fear entropy, embrace it.
When you move toward distributed computing, the likelihood of failure increases proportionally. It's not your fault; it's simply physics. Once you start spreading your data and applications across more devices, access to resources such as CPU, memory, and I/O has a higher rate of failure.
Embrace entropy with chaos experiments and increase your cloud native capability model. We’ll investigate some of the leading chaos frameworks for Kubernetes and dive into hands-on experiments targeted within blast radiuses.
The Game of Life is an intriguing game. At first glance it looks simple, but as you look closer, it turns out to be quite complex. How can we implement this game under different constraints, and what are those constraints? Is it possible to use functional programming for this and honor immutability? You see, it is intriguing.
We will discuss the constraints, think about how we may be able to solve them, and along the way discover how functional programming can play a role. By the end of this session we will have a fully working program, built using live coding, that illustrates some nice ideas emerging from our discussions.
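As a hint of where the discussion can lead, here is one way to keep the core rule pure: the next state of a cell is just a function of its current state and its live-neighbor count (a sketch, not the full program we'll build):

```java
// The classic rules, expressed with no mutation anywhere.
public class Life {
    static boolean aliveNext(boolean aliveNow, int liveNeighbors) {
        return liveNeighbors == 3 || (aliveNow && liveNeighbors == 2);
    }

    public static void main(String[] args) {
        System.out.println(aliveNext(true, 1));   // false: dies of loneliness
        System.out.println(aliveNext(true, 2));   // true: survives
        System.out.println(aliveNext(false, 3));  // true: reproduction
    }
}
```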
Threads are considered lightweight, but that's quite relative, and in some contexts they turn out to be rather heavyweight. The blocking nature of threads makes them rather unsuitable for asynchronous programming. That's the reason Java is introducing virtual threads.
In this presentation we will discuss the deficiencies of the current thread infrastructure and, using practical examples, see how virtual threads provide a much better model for asynchronous programming.
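As a small preview, here is a sketch using the virtual-thread-per-task executor that is standard as of Java 21; the workload is a stand-in for blocking I/O:

```java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

// Virtual threads make the thread-per-task model cheap again: here ten
// thousand blocking tasks run without exhausting platform threads.
public class VirtualThreadsDemo {
    public static void main(String[] args) {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                executor.submit(() -> {
                    Thread.sleep(Duration.ofMillis(100));  // blocking call parks the virtual thread
                    return i;
                }));
        } // close() waits for the submitted tasks to finish
    }
}
```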
By now your organization has planted a flag in “the Cloud” and it is up to you to figure out just what that means for your application portfolio. Should everything be a microservice? Wait, what is a microservice anyway? How do you deal with massively distributed applications? What do you do with your legacy portfolio? In this presentation, we will walk through how the cloud impacts our applications and what we can do to successfully move our organizations forward.
By now your organization has planted a flag in “the Cloud” and it is up to you to figure out just what that means for your application portfolio. Should everything be a microservice? Wait, what is a microservice anyway? How do you deal with massively distributed applications? What do you do with your legacy portfolio? In this presentation, we will walk through how the cloud impacts our applications and what we can do to successfully move our organizations forward.
Over the last decade, DevOps has emerged as an influential business philosophy and practice, helping businesses drive high-quality software to market faster. DevOps focuses on the elimination of bottlenecks that occur when development and operational resources are too divorced from one another. But what about friction in the development and test process? What about the delayed feedback cycles that come from slow builds and test flakiness? How can we reduce friction in areas that are outside of the focus of DevOps? Raju Gandhi, Developer Advocate at Gradle Enterprise, will talk about how the emerging practice of Developer Productivity Engineering (DPE) addresses these questions. The presentation will include examples of DPE practices in action from Java projects using the Maven or Gradle build tool.
Attendees will walk away from this presentation with a better understanding of:
Distributed computing is hard. A significant challenge is that, typically, when you want to get something done, you have to call another service. Understanding how services are discovered and connected is fundamental to understanding Kubernetes' strengths.
We’ll walk through some networking concepts and hands-on examples of various techniques to understand simple to sophisticated traffic control and routing. Ingress and Istio will be demystified.
Our software engineering teams are facing new challenges every day that come with the rapidly changing industry and recent events.
People don't seem to be motivated by the same things that they previously found compelling. Organizations are giving up glamorous office spaces and pivoting to remote teams. These changes require us to quickly pivot as leaders to learn new skills, frameworks, and techniques to continue supporting our teams while maintaining our own sanity.
In this workshop, you will learn and practice key skills to help propel your team through today's challenges and setbacks. The topics will be relevant to all Software Engineering leaders who wish to find ways to motivate and inspire their teams - regardless of their organization's existing circumstances.
Specific topics will include:
Activities will require a handout - please download (and print if preferred) the course handout located in the Slides folder.
Our software engineering teams are facing new challenges every day that come with the rapidly changing industry and recent events.
People don't seem to be motivated by the same things that they previously found compelling. Organizations are giving up glamorous office spaces and pivoting to remote teams. These changes require us to quickly pivot as leaders to learn new skills, frameworks, and techniques to continue supporting our teams while maintaining our own sanity.
In this workshop, you will learn and practice key skills to help propel your team through today's challenges and setbacks. The topics will be relevant to all Software Engineering leaders who wish to find ways to motivate and inspire their teams - regardless of their organization's existing circumstances.
Specific topics will include:
Activities will require a handout - please download (and print if preferred) the course handout located in the Slides folder.
How do we move information in real time and connect machine learning models to make decisions on our business data? This presentation goes through the machine learning and Kafka tools that can help achieve that goal.
In this presentation, we start with Kafka as our data backplane and look at how we get information into our pub/sub system. As data enters Kafka, how do we sample it and train our model, and then how do we unleash that model on our real-time data? In other words, picture extracting samples of credit card approvals for training, then attaching the model for online processing: the moment we receive an application, we can either approve or decline it based on a machine learning model trained on historical data. We will also discuss other options, like Spark, H2O, and more.
Hey. Remember that time when we used to create JAR or WAR files and just SSH into a box to deploy on a single machine? Well, it was simpler, but maybe it also wasn't that great of an idea. Time has certainly moved on, and our releases have become very advanced, with highly technical CI/CD pipelines, Docker or Debian packages, multi-purpose testing, producing signatures, performing security scans, performing releases, and then, when you're done, telling the whole world about it. Whew! This presentation introduces JReleaser, a release platform for Java that does a multitude of chores for you.
In this session we will discuss:
Since 1994, the original Gang of Four book, “Design Patterns: Elements of Reusable Object-Oriented Software”, has helped developers recognize common patterns in development. The book's examples were originally written in C++, but there have been books that translate the original design patterns into their authors' preferred languages. One thing about the Gang of Four design patterns that has particularly stuck with me is that they are, for the most part, testable. With the exception of Singleton, all of the patterns are unit testable. Design patterns are also our common developer language. When a developer says “Let's use the Decorator pattern”, we know what is meant.
What's new, though, is functional programming, so we will also discuss how these patterns change in our modern functional programming world. For example, functional currying in place of the builder pattern, using an enum for a singleton, and reconstructing the state pattern using sealed interfaces. We will cover much more, and I think you will be really excited about this topic and about putting it into practice on your own codebase.
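To hint at the flavor of the discussion, here are two of those ideas in miniature; these are illustrative sketches rather than the exact code from the session:

```java
import java.util.function.Function;

public class FunctionalPatterns {

    // "Currying": each argument is applied one at a time, and partial
    // application plays the role the builder's intermediate object used to.
    static final Function<String, Function<String, String>> greeting =
            salutation -> name -> salutation + ", " + name + "!";

    // Enum singleton: exactly one instance, created safely by the JVM.
    enum Registry {
        INSTANCE;
        void register(String key) { System.out.println("registered " + key); }
    }

    public static void main(String[] args) {
        Function<String, String> hello = greeting.apply("Hello");  // partially applied
        System.out.println(hello.apply("NFJS"));

        Registry.INSTANCE.register("decorator");
    }
}
```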
We will be using a cloud environment, so you can relax.
You don't have to run locally, but if you want to, you should have JDK 19.
Since 1994, the original Gang of Four book, “Design Patterns: Elements of Reusable Object-Oriented Software”, has helped developers recognize common patterns in development. The book's examples were originally written in C++, but there have been books that translate the original design patterns into their authors' preferred languages. One thing about the Gang of Four design patterns that has particularly stuck with me is that they are, for the most part, testable. With the exception of Singleton, all of the patterns are unit testable. Design patterns are also our common developer language. When a developer says “Let's use the Decorator pattern”, we know what is meant.
What's new, though, is functional programming, so we will also discuss how these patterns change in our modern functional programming world. For example, functional currying in place of the builder pattern, using an enum for a singleton, and reconstructing the state pattern using sealed interfaces. We will cover much more, and I think you will be really excited about this topic and about putting it into practice on your own codebase.
We will be using a cloud environment, so you can relax.
You don't have to run locally, but if you want to, you should have JDK 19.
A Docker image is the artifact of the container world. Leaner images allow for quicker build times, less resource consumption (disk pressure and network usage), fewer attack vectors, and better performance when pulling or pushing images for storage or upon deployment. Lean images also produce smaller containers, which in turn require fewer resources at runtime, allowing for higher server density. Multistage Dockerfiles can help reduce the complexity of CI/CD pipelines by reducing the number of moving parts in building, testing, and producing a production-grade image. The key to building leaner (smaller) images with little build-time overhead is to understand how Docker uses the Union File System (UFS), how Docker builds (and when it busts) the cache, and how to use the Dockerfile specification to its fullest potential.
In this exercise-driven, hands-on workshop, we will dive deep, peeking under the hood to get a glimpse of the Union File System, and then proceed to look at the effects of many of the important Dockerfile instructions. We will see how best to use them, and highlight any caveats that we should be aware of.
By the end of this class you will have gained a keen understanding of how best to write your Dockerfiles, and how to effectively build and design lean images and containers.
A Docker image is the artifact of the container world. Leaner images allow for quicker build times, less resource consumption (disk pressure and network usage), fewer attack vectors, and better performance when pulling or pushing images for storage or upon deployment. Lean images also produce smaller containers, which in turn require fewer resources at runtime, allowing for higher server density. Multistage Dockerfiles can help reduce the complexity of CI/CD pipelines by reducing the number of moving parts in building, testing, and producing a production-grade image. The key to building leaner (smaller) images with little build-time overhead is to understand how Docker uses the Union File System (UFS), how Docker builds (and when it busts) the cache, and how to use the Dockerfile specification to its fullest potential.
In this exercise-driven, hands-on workshop, we will dive deep, peeking under the hood to get a glimpse of the Union File System, and then proceed to look at the effects of many of the important Dockerfile instructions. We will see how best to use them, and highlight any caveats that we should be aware of.
By the end of this class you will have gained a keen understanding of how best to write your Dockerfiles, and how to effectively build and design lean images and containers.
These days, you can’t swing a dry erase marker without hitting someone talking about microservices. Developers are studying Eric Evans's prescient book Domain-Driven Design. Teams are refactoring monolithic apps, looking for bounded contexts and defining a ubiquitous language. And while there have been countless articles, videos, and talks to help you convert to microservices, few have spent any appreciable time asking whether a given application should be a microservice. In this talk, I will show you a set of factors you can apply to help you decide if something deserves to be a microservice or not. We’ll also look at what we need to do to maintain a healthy micro(services)biome.
Microservices demand more than just a new architecture; they require a cultural shift. In this workshop, we'll cover:
These days, you can’t swing a dry erase marker without hitting someone talking about microservices. Developers are studying Eric Evans's prescient book Domain-Driven Design. Teams are refactoring monolithic apps, looking for bounded contexts and defining a ubiquitous language. And while there have been countless articles, videos, and talks to help you convert to microservices, few have spent any appreciable time asking whether a given application should be a microservice. In this talk, I will show you a set of factors you can apply to help you decide if something deserves to be a microservice or not. We’ll also look at what we need to do to maintain a healthy micro(services)biome.
Microservices demand more than just a new architecture; they require a cultural shift. In this workshop, we'll cover: