In recent years the development of the Java language has accelerated significantly. Some of the more recent features are quite exciting and can lead to better modeling, fewer errors, less verbosity, and better developer productivity. In this presentation we will take a close look at these features and their significance, using concrete examples as use cases to explore them. In addition to looking at these features, we will also take a look under the hood at how they are implemented. This will help us gain a better understanding of the built-in efficiencies these features come with.
Get ready to take part in some live coding as we dive into these amazing features.
Functional and reactive programming is gaining popularity and use. One hurdle developers face with these approaches is exception handling. Dealing with exceptions in these styles is confusing in the beginning and is often messy and error-prone. In this presentation we will step back, take a closer look at exception handling in functional and reactive programming, and discuss the dos and don'ts. We will use concrete examples and live coding to illustrate the problem and the possible solutions.
We will explore options available in different languages and libraries and learn from them.
Have you looked at the methods of CompletableFuture and wondered how to actually use them? Do the method names appear strange, or at least different from what we're used to? Let's take another look so those names become easier to remember and easier to put to use.
In this presentation we will take a practical look at CompletableFuture and how to make good use of it, and how to deal with exceptions.
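As a taste of what we'll explore, here's a minimal sketch of a CompletableFuture pipeline with exception handling (the fetchPrice helper is hypothetical, standing in for a remote call):

```java
import java.util.concurrent.CompletableFuture;

public class CompletableFutureDemo {
    public static void main(String[] args) {
        // supplyAsync runs the task on the common ForkJoinPool
        CompletableFuture.supplyAsync(() -> fetchPrice("AAPL"))
            .thenApply(price -> price * 0.9)      // transform the result
            .exceptionally(ex -> 0.0)             // recover from failure with a fallback value
            .thenAccept(p -> System.out.println("Discounted price: " + p))
            .join();                              // block only at the edge, e.g., in main
    }

    // hypothetical helper standing in for a slow remote call
    private static double fetchPrice(String symbol) {
        return 100.0;
    }
}
```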
In this example-driven presentation, we'll focus on how to build reactive APIs in Spring. We'll start with Spring WebFlux, a reactive reimagining of the popular Spring MVC framework for HTTP-based APIs. Then we'll have a look at RSocket, an intriguing new communication protocol that is reactive by design.
Traditionally, applications have been built using a blocking, synchronous model. Although comfortable and intuitive for most programmers, this model doesn't scale well. And although there are several new approaches to reactive programming, they don't necessarily fit into the familiar programming model that Spring developers are accustomed to working with.
Spring 5 introduced a set of new reactive features, enabling non-blocking, asynchronous code that scales well using minimal threads. Moreover, it builds on the same concepts and programming models that Spring developers have used for years.
In this example-driven presentation, we'll take a high-level look at how Spring applies reactive programming at all layers of an application. You'll also learn the essentials of working with Project Reactor, the reactive programming library for Java that Spring's reactive support builds upon.
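To give a flavor of Project Reactor, here's a minimal sketch of a Flux pipeline, assuming reactor-core is on the classpath (the data is made up for illustration):

```java
import reactor.core.publisher.Flux;

public class ReactorDemo {
    public static void main(String[] args) {
        // Flux is Reactor's 0..N asynchronous sequence; Mono is its 0..1 counterpart
        Flux.just("alpha", "bravo", "charlie")
            .map(String::toUpperCase)
            .filter(s -> s.startsWith("A") || s.startsWith("C"))
            .subscribe(System.out::println); // nothing flows until something subscribes
    }
}
```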
Traditionally, applications have been built using a blocking, synchronous model. Although comfortable and intuitive for most programmers, this model doesn't scale well. And although there are several new approaches to reactive programming, they don't necessarily fit into the familiar programming model that Spring developers are accustomed to working with.
Spring 5 introduced a set of new reactive features, enabling non-blocking, asynchronous code that scales well using minimal threads. Moreover, it builds on the same concepts and programming models that Spring developers have used for years.
In this example-driven presentation, we'll focus on working with reactive data persistence. We'll start by seeing how to create reactive repositories for relational databases with Spring Data R2DBC. Then we'll explore non-relational reactive persistence for MongoDB and Cassandra.
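As a preview, here's a minimal sketch of a reactive repository with Spring Data R2DBC; the Order record and its fields are hypothetical:

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import reactor.core.publisher.Flux;

// hypothetical entity; in R2DBC it maps to a table rather than using JPA
record Order(@Id Long id, Long customerId, double total) {}

// Spring Data derives the query from the method name and returns a reactive stream
interface OrderRepository extends ReactiveCrudRepository<Order, Long> {
    Flux<Order> findByCustomerId(Long customerId); // emits rows as they arrive, not as a List
}
```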
Traditionally, applications have been built using a blocking, synchronous model. Although comfortable and intuitive for most programmers, this model doesn't scale well. And although there are several new approaches to reactive programming, they don't necessarily fit into the familiar programming model that Spring developers are accustomed to working with.
Spring 5 introduced a set of new reactive features, enabling non-blocking, asynchronous code that scales well using minimal threads. Moreover, it builds on the same concepts and programming models that Spring developers have used for years.
In this example-driven session, we're going to look at how to implement GraphQL in Spring. You'll learn how Spring for GraphQL builds upon GraphQL Java, recognize the use cases that are best suited for GraphQL, and see how to build a GraphQL API in Spring.
Typical REST APIs deal in resources. This is fine for many use cases, but it tends to be more rigid and less efficient in others.
For example, in a shopping API, it's important to weigh how much or how little information should be provided in response to a request for an order resource. Should the order resource contain only order specifics, but no details about the order's line items or the products in those line items? If all relevant details are included in the response, it breaks the boundaries of what the resource should offer and is overkill for clients that do not need it. On the other hand, strict factoring of the resource requires the client to make multiple requests to the API to fetch all of the relevant information it may need.
GraphQL offers a more flexible alternative to REST, setting aside the resource-oriented model and focusing on what a client needs. Much as SQL allows data from multiple tables to be selected and joined in response to a query, GraphQL offers API clients the possibility of tailoring the response to provide all of the information they need and nothing they don't.
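To make this concrete, here's a minimal sketch of what a Spring for GraphQL controller might look like; the Order and LineItem types and their services are hypothetical:

```java
import java.util.List;
import org.springframework.graphql.data.method.annotation.Argument;
import org.springframework.graphql.data.method.annotation.QueryMapping;
import org.springframework.graphql.data.method.annotation.SchemaMapping;
import org.springframework.stereotype.Controller;

// hypothetical domain types and services, sketched inline
record Order(Long id, String status) {}
record LineItem(Long id, String product, int quantity) {}
interface OrderService { Order findById(Long id); }
interface LineItemService { List<LineItem> findByOrder(Order order); }

@Controller
class OrderController {

    private final OrderService orderService;
    private final LineItemService lineItemService;

    OrderController(OrderService orderService, LineItemService lineItemService) {
        this.orderService = orderService;
        this.lineItemService = lineItemService;
    }

    @QueryMapping // resolves the top-level "order" query in the schema
    Order order(@Argument Long id) {
        return orderService.findById(id);
    }

    @SchemaMapping // runs only when the client actually selects Order.lineItems
    List<LineItem> lineItems(Order order) {
        return lineItemService.findByOrder(order);
    }
}
```

The point: a client that wants only order specifics never triggers the lineItems resolver, while a client that needs everything gets it in a single request.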
Architecture is not a static representation of a system. There are several complexities and risks involved in creating one. One way to mitigate the risk is to evolve the architecture. But there are as many risks in evolving an architecture as there are in not evolving it. In this interactive workshop we will explore a set of practices that we can use to mitigate those risks. Then we will dive into discussing some common and popular architectural patterns.
Finally, we will take some example applications and discuss how to evolve architecture to meet the needs of those applications.
A computer with a git client installed, to access the version control system that hosts the lab material.
Architecture is not a static representation of a system. There are several complexities and risks involved in creating one. One way to mitigate the risk is to evolve the architecture. But there are as many risks in evolving an architecture as there are in not evolving it. In this interactive workshop we will explore a set of practices that we can use to mitigate those risks. Then we will dive into discussing some common and popular architectural patterns.
Finally, we will take some example applications and discuss how to evolve architecture to meet the needs of those applications.
A computer with a git client installed, to access the version control system that hosts the lab material.
There are some amazing features being introduced in Java over the coming years, and some of them are just around the corner. However, these features, in various forms, have been available in other languages for a while.
In this presentation we will take a look at some features in Kotlin, Scala, and JavaScript that are soon to be features of Java, understand their power and benefits, and see how we can make use of them. We will relate these features to how they are manifesting in Java.
The power of Java's functional programming goes far beyond lambdas and the filter and map operations of the Stream API. In this presentation we will look at various data transformations and see how the functional APIs in the JDK can provide concise solutions for them.
We will start with the Stream API and quickly dive into some newer functions, as well as the utilities of the Collectors class. Along the way you will get a deeper understanding of some of the excellent functions that are hidden in plain sight.
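For a flavor of what's hidden in plain sight, here's a minimal sketch using groupingBy with a downstream collector and the teeing collector (the Trade record is made up for illustration):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class CollectorsDemo {
    record Trade(String symbol, double price) {} // hypothetical data type

    public static void main(String[] args) {
        List<Trade> trades = List.of(
            new Trade("AAPL", 180.0), new Trade("AAPL", 190.0), new Trade("GOOG", 140.0));

        // groupingBy + a downstream collector aggregates in a single pass
        Map<String, Double> avgBySymbol = trades.stream()
            .collect(Collectors.groupingBy(Trade::symbol,
                     Collectors.averagingDouble(Trade::price)));

        // teeing (Java 12+) feeds two collectors and merges their results
        var minMax = trades.stream()
            .collect(Collectors.teeing(
                Collectors.minBy(Comparator.comparingDouble(Trade::price)),
                Collectors.maxBy(Comparator.comparingDouble(Trade::price)),
                Map::entry));

        System.out.println(avgBySymbol);
        System.out.println(minMax);
    }
}
```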
How do we move information in real time and connect machine learning models to make decisions on our business data? This presentation walks through the machine learning and Kafka tools that can help achieve that goal.
In this presentation, we start with Kafka as our data backplane and look at how we get information into our pub/sub. As events enter Kafka, how do we sample that data and train our model, and then how do we unleash that model on our real-time data? In other words, picture extracting samples of credit card applications for training, then attaching the model for online processing: the moment we receive an application, we can approve or decline it based on a machine learning model trained on historical data. We will also discuss other options, like Spark, H2O, and more.
We have been using JUnit and doing TDD for years, but you can take testing further. In this session, we will discuss some tools you absolutely need for testing your code outside of the regular stack you currently use.
Hey. Remember that time when we used to create jar or war files and we used to just ssh into a box and deploy? Well, it was simpler, but maybe that wasn't that great of an idea. Time has certainly moved on, and our releases have become very advanced, with highly technical CI/CD pipelines, Docker or Debian packages, multi-purpose testing, producing signatures, performing security scans, performing releases, and then, when you're done, telling the whole world about it. Whew! This presentation introduces JReleaser, a release platform for Java that does a multitude of chores for you.
In this session we will discuss:
Hopefully, we have started moving on from Java 8. One of the great benefits of doing so, and there are many, is the module system. It is a controversial topic indeed, but I am hoping in this presentation to make some solid arguments that it is an essential part of our development.
Although the Resource-Oriented Architecture is one of the oldest and most successful distributed architectures, it remains poorly understood and often completely overlooked today.
Much of the microservices architecture pattern is focused on taking applications apart, though seemingly everyone has different ideas on how to put things back together again.
In this session, we will start with a summary of the resource-oriented architecture along with its strengths and weaknesses. Michael will share hard-earned, real-world experience applying ROA concepts to complex microservice environments to successfully build an infinitely scalable, extensible, and understandable system.
Mob Programming is a style of programming in which the entire team sits together and works on a single task at a time. Teams that have worked this way have found that many of the problems that plague normal development just melt away, possibly because communication and learning increase. Teams also find that the quality of their code increases. They find their capacity to create increases. However, the best part of all this is that teams end up being happier and more cohesive.
In this session we introduce the core concepts of mob programming and then get hands-on, mobbing on a coding kata.
Integration, once a luxury, is now a necessity. Doing it well, however, continues to be elusive. Early attempts to build better distributed systems, such as DCOM, CORBA, and SOAP, were widely regarded as failures. Today the focus is on REST, RPC, and GraphQL-style APIs.
Which is best? The go-to answer for architects is, of course, “it depends.”
In this session, we look at the various API approaches and how they attempt to deal with the challenges of decoupling client from server, evolvability, extensibility, adaptability, and composability.
The biggest challenge is that needs change over time, and APIs must necessarily evolve. Versioning is challenging, and breaking changes are inevitable. You'll leave this session with a high-level understanding of these approaches, their respective tradeoffs, and ultimately how to align your API approach with your architectural and organizational goals.
On the NFJS tour, there are questions that seem to come up again and again. One common example is “How do we determine which new tools and technologies we should focus our energy on learning?” Another is “How do we stop management from forcing us to cut corners on every release so we can create better and more maintainable code?” which, after a while, becomes “How can we best convince management we need to rewrite the business application?”
There is a single meta-answer to all these questions and many others.
It begins with the understanding that what we as engineers value and what the business values are often very different (even if the ultimate goals are the same). By understanding these different perspectives, it's possible to begin to frame our arguments around the needs and the wants of the business. This alone will make any engineer significantly more effective.
This session picks up where “Stop writing code and start solving problems” leaves off, discussing what value is and how we align the values of the business with the needs and values of the engineer.
In Part 1, you learned the core principles of influence and persuasion. How do we take this back to the office and apply what we've learned?
We dive deep into specific strategies to get both the team and the business on board with your ideas and solutions. We cover several real-world patterns you can follow to be more effective and more persuasive. Part 1 was conceptual; Part 2 is practical.
By the end of this conference you will have learned many new tools and technologies. The easy part is done; now for the hard part: getting the rest of the team, and management, on board with the new ideas. Easier said than done.
Whether you want to effect culture change in your organization, lead the transition toward a new technology, or are simply asking for better tools, you must first understand that having a “good idea” is just the beginning. How can you dramatically increase your odds of success?
You will learn 12 concrete strategies to build consensus within your team, as well as 6 techniques to dramatically increase the odds that the other person will say “Yes” to your requests.
As a professional mentalist, Michael has been a student of psychology, human behavior, and the principles of influence for nearly two decades. There are universal principles of influence that are necessary to both understand and leverage if you want to be a more effective leader of change in your organization.
In this session we discuss strategies for getting your team on board, as well as when and how to approach management within the department and higher up in the organization.
Continuous refactoring is critical to succeeding in projects and is an important part of sustainable agile development.
In this workshop, we will start by discussing how to approach refactoring, the essential steps we need to take, and look into how to incrementally improve the internal design of code to make it extensible, maintainable, and cost-effective to change. In addition to discussing the concepts, we will take several code examples from real projects, discuss the code smells and explore the refactoring techniques. Along the way, we will also dive into refactoring short code samples and measure the quality of code before and after refactoring.
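As a small taste of the kind of transformation we'll practice, here's a sketch of refactoring an imperative loop into an intention-revealing pipeline (the Customer type is made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class RefactoringDemo {
    record Customer(String name, boolean active) {} // hypothetical domain type

    // Before: a loop that mixes iteration, filtering, and transformation
    static List<String> activeNamesBefore(List<Customer> customers) {
        List<String> names = new ArrayList<>();
        for (Customer c : customers) {
            if (c.active()) {
                names.add(c.name().toUpperCase());
            }
        }
        return names;
    }

    // After: each step of the transformation is named and separated
    static List<String> activeNamesAfter(List<Customer> customers) {
        return customers.stream()
            .filter(Customer::active)
            .map(Customer::name)
            .map(String::toUpperCase)
            .toList();
    }
}
```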
A computer with a git client installed, to access the git repository.
Java 8 or newer
Your favorite IDE
Continuous refactoring is critical to succeeding in projects and is an important part of sustainable agile development.
In this workshop, we will start by discussing how to approach refactoring, the essential steps we need to take, and look into how to incrementally improve the internal design of code to make it extensible, maintainable, and cost-effective to change. In addition to discussing the concepts, we will take several code examples from real projects, discuss the code smells and explore the refactoring techniques. Along the way, we will also dive into refactoring short code samples and measure the quality of code before and after refactoring.
A computer with a git client installed, to access the git repository.
Java 8 or newer
Your favorite IDE
Game of Life is an intriguing game. At first glance it looks simple, but as you look closer, it turns out to be quite complex. How can we implement this game under different constraints, and what are those constraints? Is it possible to use functional programming for this, to honor immutability? You see, it is intriguing.
We will discuss the constraints, think about how we may be able to solve them, and along the way discover how functional programming can play a role. By the end of this session we will have a fully working program, built through live coding, illustrating some nice ideas that emerge from our discussions.
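As one example of the ideas that tend to emerge, here's a minimal, purely functional sketch of computing the next generation; representing the board as an immutable set of live cells is one possible design choice, not the only one:

```java
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class Life {
    record Cell(int x, int y) {}

    // next generation as a pure function: a Set goes in, a new Set comes out, nothing mutates
    static Set<Cell> nextGeneration(Set<Cell> live) {
        Map<Cell, Long> neighborCounts = live.stream()
            .flatMap(Life::neighbors)
            .collect(Collectors.groupingBy(c -> c, Collectors.counting()));
        return neighborCounts.entrySet().stream()
            .filter(e -> e.getValue() == 3
                      || (e.getValue() == 2 && live.contains(e.getKey())))
            .map(Map.Entry::getKey)
            .collect(Collectors.toUnmodifiableSet());
    }

    // the eight neighbors of a cell
    static Stream<Cell> neighbors(Cell c) {
        return IntStream.rangeClosed(-1, 1).boxed()
            .flatMap(dx -> IntStream.rangeClosed(-1, 1)
                .filter(dy -> dx != 0 || dy != 0)
                .mapToObj(dy -> new Cell(c.x() + dx, c.y() + dy)));
    }

    public static void main(String[] args) {
        // a blinker oscillates between a horizontal and a vertical bar
        Set<Cell> blinker = Set.of(new Cell(0, 1), new Cell(1, 1), new Cell(2, 1));
        System.out.println(nextGeneration(blinker));
    }
}
```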
Threads are considered lightweight, but that's quite relative; in some contexts they turn out to be rather heavyweight. The blocking nature of threads makes them rather unsuitable for asynchronous programming. That's the reason Java is introducing virtual threads.
In this presentation we will discuss the deficiencies of the current thread infrastructure and, using practical examples, see how virtual threads provide a much better model for asynchronous programming.
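For a quick preview, here's a minimal sketch of the virtual-thread model (Java 21+); ten thousand blocking tasks would be prohibitive with platform threads but are cheap here:

```java
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadsDemo {
    public static void main(String[] args) {
        // one virtual thread per task; blocking is cheap because the carrier
        // thread is released while the virtual thread is parked
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                executor.submit(() -> {
                    Thread.sleep(1000); // parks the virtual thread, not an OS thread
                    return i;
                }));
        } // close() waits for the submitted tasks to finish
    }
}
```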
Over the past few years, the basic idioms and recommended programming styles for Java development have changed. Functional features are now favored, using streams, lambda expressions, and method references. The new six-month release schedule provides the language with new features, like modules and local variable type inference, much more frequently. Even the new license changes in the language seem to complicate installation, usage, and especially deployment.
The purpose of this training course is to help you adapt to the new ways of coding in Java. The latest functional approaches are included, such as using parallel streams for concurrency and when to expect them to be useful. All the significant new features added to the language will be reviewed and evaluated, with the goal of understanding what problems they were designed to handle and when they can be used effectively in your code.
The workshop will use Java 21. You can get that from any major vendor, including Oracle. If you don't have a preferred vendor, then https://adoptium.net/ offers pre-built OpenJDK binaries for free.
We'll use IntelliJ IDEA for coding, but nothing in the materials requires any particular IDE. Only the Community edition is necessary, though the instructor will be using the Ultimate edition.
We will also use Gradle as our build tool, but most of the major IDEs can create Gradle-based Java projects without additional installs. You are welcome to use Maven if you prefer, but the instructor may not be able to help if you run into issues.
Over the past few years, the basic idioms and recommended programming styles for Java development have changed. Functional features are now favored, using streams, lambda expressions, and method references. The new six-month release schedule provides the language with new features, like modules and local variable type inference, much more frequently. Even the new license changes in the language seem to complicate installation, usage, and especially deployment.
The purpose of this training course is to help you adapt to the new ways of coding in Java. The latest functional approaches are included, such as using parallel streams for concurrency and when to expect them to be useful. All the significant new features added to the language will be reviewed and evaluated, with the goal of understanding what problems they were designed to handle and when they can be used effectively in your code.
The workshop will use Java 21. You can get that from any major vendor, including Oracle. If you don't have a preferred vendor, then https://adoptium.net/ offers pre-built OpenJDK binaries for free.
We'll use IntelliJ IDEA for coding, but nothing in the materials requires any particular IDE. Only the Community edition is necessary, though the instructor will be using the Ultimate edition.
We will also use Gradle as our build tool, but most of the major IDEs can create Gradle-based Java projects without additional installs. You are welcome to use Maven if you prefer, but the instructor may not be able to help if you run into issues.
In this example-driven session, we'll review several tips and tricks to make the most out of your Spring development experience. You'll see how to apply the best features of Spring and Spring Boot, including the latest and greatest features of Spring Framework 6.x and Spring Boot 3.x with an eye to what's coming in Spring 7 and Boot 4.
Spring has been the de facto standard framework for Java development for nearly two decades. Over the years, Spring has continued to evolve and adapt to meet the ever-changing requirements of software development. And for nearly half that time, Spring Boot has carried Spring forward, capturing some of the best Spring patterns as auto-configuration.
As with any framework or language that has this much history and power, there are just as many ways to get it right as there are to get it wrong. How do you know that you are applying Spring in the best way in your application?
You'll need…
We live in a world of microservices. Yet, what is a microservice? What defines the boundaries of a microservice? How do we define the relationships between microservices? Thankfully, domain-driven design gives us the concepts and practices to better design and decompose our services.
In this session we will consider many of the concepts of DDD — how bounded contexts use ubiquitous language to model the domain, and how context maps can be used to establish the interconnections between services, as well as aggregates and domain events, all of which will serve us well as we go about creating our microservices.
In this session we will go further with DDD, considering aspects like aggregates, domain events, factories, and repositories, and considering how DDD can be represented in architecture, be that hexagonal, REST, or RPC.
If you need a deeper dive into DDD, then this is the session for you.
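To hint at what an aggregate raising domain events can look like in code, here's a minimal sketch (the Order aggregate and its event are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class DddSketch {
    record OrderPlaced(String orderId) {} // a hypothetical domain event

    // An aggregate guards its invariants and records the events it raises
    static class Order {
        private final String id;
        private final List<Object> events = new ArrayList<>();
        private boolean placed;

        Order(String id) { this.id = id; }

        void place() {
            if (placed) throw new IllegalStateException("already placed"); // invariant
            placed = true;
            events.add(new OrderPlaced(id)); // raised here, published by infrastructure later
        }

        List<Object> pendingEvents() { return List.copyOf(events); }
    }

    public static void main(String[] args) {
        Order order = new Order("order-42");
        order.place();
        System.out.println(order.pendingEvents());
    }
}
```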
Albert Einstein once said — “We cannot solve our problems with the same thinking we used when we created them.”
As developers, architects, managers, and tech leads, our job entails solving big problems. However, our systems are composed of parts, and it is the interaction of those parts that ensures the system works and tells us where to go looking when things fail. What does it mean to think in systems? What are the benefits, and what tools are available to us? If any of this intrigues you (and it should) come join me as we explore a new way of thinking about problems, both big and small.
In this session we will take a page from Donella H. Meadows's fantastic book “Thinking in Systems” and attempt to get an understanding of what makes up a system, how we can tease apart the moving parts of a system, and how to get into the mindset of a “systems thinker”.
A Docker image is the artifact of the container world. Leaner images allow for quicker build times, less resource consumption (disk pressure and network usage), fewer attack vectors, and better performance when pulling or pushing images for storage or upon deployment. Lean images also produce smaller containers, which in turn require fewer resources at runtime, allowing for higher server density. Multi-stage Dockerfiles can help reduce the complexity of CI/CD pipelines by reducing the number of moving parts in building, testing, and producing a production-grade image. The key to building leaner (smaller) images with little build-time overhead is to understand how Docker uses the Union File System (UFS), how Docker builds (and when it busts) the cache, and how to use the Dockerfile specification to its fullest potential.
In this exercise-driven, hands-on workshop, we will dive deep, peeking under the hood to get a glimpse of the Union File System, and then proceed to look at the effects of many of the important Dockerfile instructions. We will see how best to use them, and highlight any caveats that we should be aware of.
By the end of this class you will have gained a keen understanding of how best to write your Dockerfiles, and how to effectively design and build lean images and containers.
A Docker image is the artifact of the container world. Leaner images allow for quicker build times, less resource consumption (disk pressure and network usage), fewer attack vectors, and better performance when pulling or pushing images for storage or upon deployment. Lean images also produce smaller containers, which in turn require fewer resources at runtime, allowing for higher server density. Multi-stage Dockerfiles can help reduce the complexity of CI/CD pipelines by reducing the number of moving parts in building, testing, and producing a production-grade image. The key to building leaner (smaller) images with little build-time overhead is to understand how Docker uses the Union File System (UFS), how Docker builds (and when it busts) the cache, and how to use the Dockerfile specification to its fullest potential.
In this exercise-driven, hands-on workshop, we will dive deep, peeking under the hood to get a glimpse of the Union File System, and then proceed to look at the effects of many of the important Dockerfile instructions. We will see how best to use them, and highlight any caveats that we should be aware of.
By the end of this class you will have gained a keen understanding of how best to write your Dockerfiles, and how to effectively design and build lean images and containers.
You have been using Git for a while. You know how to stage and commit your work, create and delete branches, and collaborate with your team members using remotes. But Git often leaves you confused — ever committed your work to the wrong branch? Even worse, ever accidentally deleted a branch that you needed to keep around? And what in God's good name is “detached HEAD state”? Why tag commits when we have branches? Is there a better workflow than just using merges? What's the difference between a merge and a rebase?
The answer to all of these questions, and more, lies in the constitution of a commit, and the directed acyclic graph (DAG) that Git uses to manage your history. This, right here, is the key to understanding everything in Git.
In this hands-on workshop, we will level up your Git skills. We will foray into the underbelly of Git, and reveal the mystery behind the arcane interface that is the Git CLI.
By the end of this workshop, you will have a keen understanding of how best to use Git, and you will know how to dig yourself out of any prickly situation you might find yourself in. You will become your team's hero(ine). Most importantly, you will walk away with a keen appreciation of how beautiful and elegant Git really is.
You have been using Git for a while. You know how to stage and commit your work, create and delete branches, and collaborate with your team members using remotes. But Git often leaves you confused — ever committed your work to the wrong branch? Even worse, ever accidentally deleted a branch that you needed to keep around? And what in God's good name is “detached HEAD state”? Why tag commits when we have branches? Is there a better workflow than just using merges? What's the difference between a merge and a rebase?
The answer to all of these questions, and more, lies in the constitution of a commit, and the directed acyclic graph (DAG) that Git uses to manage your history. This, right here, is the key to understanding everything in Git.
In this hands-on workshop, we will level up your Git skills. We will foray into the underbelly of Git, and reveal the mystery behind the arcane interface that is the Git CLI.
By the end of this workshop, you will have a keen understanding of how best to use Git, and you will know how to dig yourself out of any prickly situation you might find yourself in. You will become your team's hero(ine). Most importantly, you will walk away with a keen appreciation of how beautiful and elegant Git really is.
The web is arguably the single most impactful revolution in human history (to date). By agreeing on a simple set of standards, we have collectively unlocked all the world's information. Documents can be discovered, retrieved, published, and shared so easily we don't even think about it.
Data, on the other hand, is a different story. Our data remains stuck in the 1980s. Locked in silos, each with a different format, interface, and conventions that must be interpreted by a human, parsed, mapped, and converted. Data is at the heart of many problems we solve today, and we produce data exponentially faster than we can consume it.
Today I can request any document from any server on the web. I need to know nothing about the underlying technology the server uses, nothing about how the information is stored or retrieved, and consume it instantly. We've been evolving those same capabilities with data over the past 20 years and the standards, tools, and technologies are reaching critical mass. The linked data revolution is now one that you can no longer ignore. Join us to see what you've been missing.
Completely Rewritten for 2023
There's nothing new or exciting about relational databases. We abstract them away with ORMs, grudgingly write a query here or there, but generally try to forget about them entirely. Then the performance and scalability problems begin. “Sharding, the secret ingredient in the web-scale sauce,” often won't help us.
The database is at the heart of nearly every system we build. Reading data and writing data account for the majority of performance bottlenecks. When it comes to SQL and relational databases, the syntax is easy, but the concepts often aren't. The most important knowledge is not obvious but it is necessary to make the right design, query, and optimization decisions.
Indexing, a glimpse under the hood of the storage engine and the query optimizer, and some best practices are all you need to bring your DB skills head and shoulders above your peers and be ready to build bigger, better, faster apps.
In tech teams it's a constant firefight. We react. Then we react to the reaction… and the cycle continues. In all this noise, in all this chaos, how do we move forward? How do we remain proactive?
A great leader must be an enabler for the team. At times this means insulating the team from the noise. At other times it means improving the environment for the team. At all times, however, it requires setting clear priorities and conditions for success.
This session is focused on the art of moving forward in even the noisiest environments.
The Mockito framework is the most popular library for creating mocks, stubs, and spies for your tests. This talk reviews why and how you might want to do that, including unit vs integration tests, creating your own mocks and stubs, setting expectations, and verifying the results.
The Mockito documentation is notoriously misleading if you don't already know the principles behind the library. This talk gives an example that hopefully clears up any confusion and makes the docs useful. Many examples will be provided covering a wide range of capabilities. In addition to the basics, issues like mocking static methods, mocking final methods and classes, using spies for existing classes, and more will be examined.
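As a preview, here's a minimal sketch of the mock/stub/verify cycle (the PriceRepository and OrderService types are invented for illustration):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.*;

import org.junit.jupiter.api.Test;

class OrderServiceTest {

    // hypothetical collaborators, defined inline to keep the sketch self-contained
    interface PriceRepository { double priceFor(String symbol); }

    record OrderService(PriceRepository repo) {
        double discountedPrice(String symbol, double discount) {
            return repo.priceFor(symbol) * (1 - discount);
        }
    }

    @Test
    void appliesDiscountToRepositoryPrice() {
        PriceRepository repo = mock(PriceRepository.class); // create the mock
        when(repo.priceFor("AAPL")).thenReturn(100.0);      // stub its behavior

        assertEquals(90.0, new OrderService(repo).discountedPrice("AAPL", 0.10), 0.001);

        verify(repo).priceFor("AAPL"); // verify the interaction occurred
    }
}
```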
The fundamental testing libraries in Java have undergone complete redesigns in the past few years. JUnit 5, known as JUnit Jupiter, redesigns the most well-known tool in all of testing. This talk will demonstrate the new features, how they are intended to be used, and discuss experimental ideas in the pipeline.
JUnit has been remarkably stable over the years and is one of the most widely adopted frameworks in the Java world. The latest version, JUnit 5, takes JUnit to the next level. Full of new features like conditional test execution, parametric testing, labeling and filtering tests, and more, it brings all the modern thinking on testing into the JUnit world. It also takes advantage of the functional features added to Java since version 8 to create a powerful, new library for testing your code.
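For a flavor of these features, here's a minimal sketch of a parameterized, labeled JUnit 5 test:

```java
import static org.junit.jupiter.api.Assertions.*;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class StringTests {

    @ParameterizedTest                       // one test method, many inputs
    @ValueSource(strings = {"racecar", "level", "noon"})
    void palindromes(String candidate) {
        assertTrue(isPalindrome(candidate));
    }

    @Test
    @DisplayName("blank strings are rejected") // readable names in reports
    @Tag("fast")                               // label for filtering test runs
    void rejectsBlank() {
        assertThrows(IllegalArgumentException.class, () -> isPalindrome(" "));
    }

    // the code under test, inlined to keep the sketch self-contained
    static boolean isPalindrome(String s) {
        if (s.isBlank()) throw new IllegalArgumentException();
        return new StringBuilder(s).reverse().toString().equals(s);
    }
}
```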
Over the last decade, DevOps has emerged as an influential business philosophy and practice, helping businesses drive high-quality software to market faster. DevOps focuses on the elimination of bottlenecks that occur when development and operational resources are too divorced from one another. But what about friction in the development and test process? What about the delayed feedback cycles that come from slow builds and test flakiness? How can we reduce friction in areas that are outside of the focus of DevOps? Raju Gandhi, Developer Advocate at Gradle Enterprise, will talk about how the emerging practice of Developer Productivity Engineering (DPE) addresses these questions. The presentation will include examples of DPE practices in action from Java projects using the Maven or Gradle build tool.
Attendees will walk away from this presentation with a better understanding of:
Many have already seen what reactive streaming can do: RxJava, Akka Streams, Project Reactor. Now Reactive Streams is part of the canonical package for Java (the java.util.concurrent.Flow API), and we can handle asynchronous pipelines with boundaries and build better, well-thought-out applications.
This session introduces you to the core components of Reactive Streams: Publisher, Subscriber, and Processor. We will discuss when to use Reactive Streams and how to deal with complicated scenarios like backpressure. We will also cover some of the core problem spaces and how to overcome them. After understanding the core interfaces, we will see how to use them with RxJava, Akka Streams, and Reactor.
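As a preview of the core interfaces, here's a minimal sketch using the JDK's own Flow API, where the subscriber controls demand one item at a time:

```java
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    public static void main(String[] args) throws InterruptedException {
        // SubmissionPublisher is the JDK's built-in Publisher implementation
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<>() {
                private Flow.Subscription subscription;

                public void onSubscribe(Flow.Subscription s) {
                    this.subscription = s;
                    s.request(1); // demand-driven: request one item at a time
                }
                public void onNext(String item) {
                    System.out.println("received: " + item);
                    subscription.request(1); // signal readiness for the next item
                }
                public void onError(Throwable t) { t.printStackTrace(); }
                public void onComplete() { System.out.println("done"); }
            });
            publisher.submit("hello");
            publisher.submit("reactive streams");
        } // close() signals onComplete to subscribers
        Thread.sleep(500); // crude wait for asynchronous delivery in this demo
    }
}
```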
Testcontainers is a Java library that supports JUnit tests by providing lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container. They help with integration tests by making it easy to download and set up database containers, containerized web browsers, and more.
The best part about the Testcontainers project is that the code is (mostly) simple to write and then just works. You add the proper dependency, instantiate the desired container as part of your tests, and run as usual. Demos will include Spring Boot projects with different databases and more.
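As a preview, here's a minimal sketch of a JUnit 5 test that spins up a throwaway PostgreSQL container (the image tag is an assumption; any recent tag works):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

@Testcontainers
class PostgresIntegrationTest {

    @Container // started before the tests, thrown away afterwards
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:16");

    @Test
    void containerProvidesJdbcConnection() {
        assertTrue(postgres.isRunning());
        // Testcontainers picks a free host port; ask the container for its URL
        System.out.println(postgres.getJdbcUrl());
    }
}
```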
Kafka is a “must know.” It is the data backplane of the modern microservice architecture. It's now being used as the first persistence layer of microservices and for most data aggregation jobs. As such, Kafka has become an essential product in the microservice and big data world.
This workshop is about getting started with Kafka. We will discuss what it is and what its components are, cover the CLI tools, and learn how to program a Producer and a Consumer.
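As a taste of the programming model, here's a minimal producer sketch (the broker address and the “orders” topic are assumptions for illustration):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (var producer = new KafkaProducer<String, String>(props)) {
            // the "orders" topic is an assumption for this sketch
            producer.send(new ProducerRecord<>("orders", "order-1", "created"));
        } // close() flushes any pending records
    }
}
```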
Kafka is a “must know.” It is the data backplane of the modern microservice architecture. It's now being used as the first persistence layer of microservices and for most data aggregation jobs. As such, Kafka has become an essential product in the microservice and big data world.
This workshop is about getting started with Kafka. We will discuss what it is and what its components are, cover the CLI tools, and learn how to program a Producer and a Consumer.
Good discussions are supposed to diverge from their intended path. Free association is a feature, not a bug, and helps you see new connections between ideas. Without structure, however, it can be difficult to add context to new ideas and understand how they relate to more immediate problems. This talk discusses the technique of mental bookmarks – how to remember where you were when a discussion diverged. In addition to giving you a reputation for having an amazing memory, the skill also helps with personal awareness in general.
To give the technique context, we'll look at the fractal nature of success – the way we tend to see our current environment in relative terms, always comparing ourselves to those slightly more successful and slightly less successful.