Many agile teams (and programs) attempt to plan for an entire quarter at a time. Something changes—a better product opportunity, or a product development problem—and the quarter’s plan is not just at risk. That plan is now impossible. Instead of quarterly planning, consider continual planning. Continual planning allows a project or a program to use small deliverables to plan for the near future and replan often to deliver the most value.
Do you have to estimate schedule or cost for your innovation projects? If so, do you worry about providing an accurate—never mind precise—estimate? Estimation means “guess,” yet our managers want our estimates to be commitments. Instead of providing guesses, help your managers understand the risks through the way you provide estimates or forecasts.
You will learn the three ways to provide an estimate, how to recover when your estimates are off, and how to start the conversation about value instead of cost.
Many agile teams collaborate with a Product Owner, or maybe a customer. But how many team members collaborate as a team? Too few. Why? Because the reward system reinforces resource efficiency, not flow efficiency.
Focusing on flow efficiency frees the team to collaborate and deliver—as a team. Flow efficiency helps the organization focus on throughput—the outcome—rather than busyness—the output.
Learn to see the signs of resource efficiency and flow efficiency. Learn what to measure and when to measure it. And learn how too-common metrics, such as velocity and burndown, might actually contribute to resource-efficiency thinking instead of flow-efficiency thinking and actions.
George Santayana is famous for saying “Those who cannot remember the past are condemned to repeat it.” When SOA (Service-Oriented Architecture) was all the rage, everyone got excited about services but forgot about the data. That ended in disaster. History repeats itself, and here we are with microservices, where everyone is excited about services but, once again, forgets all about the data. In this session I will discuss some of the challenges associated with breaking apart monolithic databases, then show techniques for effectively creating data domains and splitting apart a database. I consider the data part of microservices the hardest aspect of this architecture style. In the end, it's all about the data.
In 250BC Rome began its expansion into Carthage, and later into the divided kingdoms of Alexander, starting the rise of a great empire until its decline starting around 350AD. Much can be learned from the rise and fall of the Roman Empire as it relates to a similar rise and fall: Microservices. Wait. Did I say “fall of microservices”? Over the past 5+ years Microservices has been at the forefront of most books, articles, and company initiatives. While some companies have been experiencing success with microservices, most companies have been experiencing pain, cost overruns, and failed initiatives trying to design and implement this incredibly complex architecture style. In this session I discuss and demonstrate why microservices is so vitally important to businesses, and also why companies are starting to question whether microservices is the right solution. Sir Isaac Newton once said “What goes up must come down”; Blood, Sweat & Tears sang about this in their hit “Spinning Wheel”. Microservices is no exception. Come to this provocative session to learn about the real challenges and issues associated with microservices, how we might be able to overcome some of the technical (and business) challenges, and whether microservices is really the answer to our problems.
Have you ever wondered how to share data between microservices? Have you ever wondered how to share a single database schema between hundreds (or even thousands) of microservices (cloud or on-prem)? Have you ever wondered how to version relational database changes when sharing data in a microservices environment? If any of these questions intrigue you, then you should come to this session. In this session I will describe and demonstrate various caching strategies and patterns that you can use in microservices to significantly increase performance, manage common data in a highly distributed architecture, and even manage data synchronization from cloud-based microservices. I'll describe the differences between a distributed and a replicated cache. Through live coding and demos using Hazelcast and Apache Ignite, I'll demonstrate how to share data and how to build space-based microservices, leveraging caching to its fullest extent.
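As a taste of what the live coding will look like, here is a minimal sketch (mine, not the session's demo code) of sharing data through a Hazelcast distributed map; the map name and entries are hypothetical:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

import java.util.Map;

public class SharedReferenceData {
    public static void main(String[] args) {
        // Each microservice that starts an instance joins the cluster,
        // so the map below is partitioned (distributed) across all members.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // "product-catalog" is a hypothetical map name for shared reference data.
        Map<String, String> catalog = hz.getMap("product-catalog");
        catalog.put("sku-42", "Blue Widget");

        // Any other service in the cluster now sees the same entry.
        System.out.println(catalog.get("sku-42"));

        hz.shutdown();
    }
}
```

A distributed map like this spreads entries across members; a replicated cache, by contrast, keeps a full copy of the data on every member, trading memory for read speed.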
Software architecture is hard. It is full of tradeoff analysis, decision making, technical expertise, and leadership, making it more of an art than a science. The common answer to any architecture-related question is “it depends”. To that end, I firmly believe there are no “best practices” in software architecture because every situation is different, which is why I titled this talk “Essential Practices”: those practices companies and architects are using to achieve success in architecture. In this session I explore in detail the top 6 essential software architectural practices (both technical architecture and process-related practices) that will make you an effective and successful software architect.
This session is broken up into 2 parts: those essential architecture practices that relate to the technical aspects of an architecture (hard skills), and those that relate to the process-related aspects of software architecture (soft skills). Both parts are needed to make architecture a success.
Whether starting a new greenfield application or analyzing the vitality of an existing application, one of the decisions an architect must make is which architecture style to use (or to refactor to). Microservices? Service-Based? Microkernel? Pipeline? Layered? Space-Based? Event-Driven? SOA? Having the right architecture style in place is essential to the success of any application, big or small. Come to this fast-paced session to learn how to analyze your requirements and domain to make the right choice about which architecture style is right for your situation.
A lot of software development seems to be concerned with finding answers; troubleshooting issues; solving problems. But what if we’re not asking the right questions? Learning to ask the right questions in the various contexts in which we work is one of the key, yet underemphasized skills that any competent engineer should have!
In this session, we’ll examine several critical questions that you should keep in your toolbox. You can use these questions to evaluate tool and technology selection decisions, architecture strategy, data modeling decisions, and more. You should walk away with this session feeling equipped to engage in “adult conversations” about software engineering, where we cut through the hype and focus on the tradeoff decisions before us.
So what are the questions? Well, you’ll just have to come to the session to find out. But if you browse through my Twitter stream, you just might find a few.
NOTE: THIS IS A DISCUSSION ORIENTED SESSION. THERE WILL BE MINIMAL LECTURE AND SLIDES.
According to Wikipedia, metaphysics is the branch of philosophy that studies the essence of a thing. This definition invites the question, “Does software design have an essence?” And if it does, would the discovery and understanding of this essence lead to a fundamental improvement in our ability to build well-designed software? Does design matter? What exactly is the design of software? Can we point to it? Or is it something immaterial?
These are the types of questions that I want to confront in this session. We’ll draw inspiration from various individuals, from Don Norman to Fred Brooks to Jack Reeves to Christopher Alexander. We’ll consider their explorations of design, both in and out of the software realm. But ultimately, we’re going to have a conversation inspired by the Dialogues of Plato, and each iteration of this session will arrive at its own conclusions.
As an architectural style, microservices are here to stay. They have crossed the proverbial chasm, and now it’s time to get to work. Microservices provide us with the ability to create truly evolutionary architectures composed of cohesive and autonomous components using well known and characterized distributed systems patterns.
As we create and compose components across the hard boundary of the network, we become deeply interested in establishing the correct boundaries, which has resulted in a renewed interest in system design and decomposition. Fortunately, the tried and true practices of Domain-Driven Design are available to us.
In this presentation, we will cover a distillation of strategic (bounded contexts, subdomains, context mapping) and tactical (aggregates, domain events) DDD techniques and demonstrate how they enable us to create effective event-driven microservices.
As enterprises attempt to develop more advanced microservices architectures, synchronous, request/reply-centric operations present multiple complex data challenges.
Event Sourcing and Command-Query Responsibility Separation (CQRS) are an effective pair of data and collaboration patterns that can help us address these challenges.
One of the key tasks involved in designing an Event Sourcing and CQRS solution is the choice of write and read models. The ideal write model supports long-term retention of an immutable, append-only log of events, organized by topics to which clients can subscribe. The ideal read model is…well, whatever you need it to be to support your query needs!
In this session, we’ll examine how effective Apache Kafka can be at supplying both write and read models, as well as look at other options for read models in the open source ecosystem.
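As a concrete sketch of the write side (my illustration; topic name, key, and payload are hypothetical), here is what appending an immutable domain event to a Kafka topic looks like:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class AccountEventWriter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // The topic acts as the append-only event log; the key keeps all
        // events for one aggregate in a single partition, preserving order.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("account-events", "account-123",
                    "{\"type\":\"Deposited\",\"amount\":100}"));
        }
    }
}
```

Keying by aggregate ID keeps all of an aggregate's events in one partition, which is what preserves their order in the log.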
Connascence is a term coined by Meilir Page-Jones in his article “Comparing Techniques by Means of Encapsulation and Connascence.” It is a software quality metric that attempts to measure coupling between entities in three dimensions: strength, locality, and degree. Rather than describing architectural coupling as simply loose or tight, connascence gives us a tool to describe and measure the huge diversity in coupling that we encounter in systems. Relatedly, it also gives us a tool for encouraging high cohesion within modules.
Connascence can also be divided into static and dynamic categories, with a long list of specific types such as connascence of name, meaning, timing and identity.
Page-Jones gives us a rule to follow, “Eliminate any unnecessary connascence and then minimize connascence across encapsulation boundaries by maximizing connascence within encapsulation boundaries,” which he says can be applied to “any future design paradigm with partitioning, encapsulation and visibility rules.”
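To make two of those types concrete, here is a deliberately small, code-level Java illustration (mine, not the session's) contrasting connascence of meaning with the weaker connascence of name:

```java
public class ShippingExample {
    enum ShippingMethod { STANDARD, EXPRESS }

    // Connascence of meaning: caller and callee must both "just know"
    // that the magic number 2 means express shipping.
    static void shipByCode(int methodCode) {
        if (methodCode == 2) { System.out.println("express"); }
    }

    // Weakened to connascence of name: both sides now share only the
    // name EXPRESS, which the compiler can check and tools can refactor.
    static void shipByName(ShippingMethod method) {
        if (method == ShippingMethod.EXPRESS) { System.out.println("express"); }
    }

    public static void main(String[] args) {
        shipByCode(2);
        shipByName(ShippingMethod.EXPRESS);
    }
}
```

Refactoring from the magic number to the shared name is exactly the kind of move Page-Jones's rule encourages: the coupling remains, but in a weaker, compiler-checkable form.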
In this session, we’ll examine connascence as it appears within architectural patterns such as REST, message-driven collaboration, and microservices, as well as within various serialization techniques. You’ll walk away with tools to help you better describe and manage coupling within your system architectures.
All software architectures have to deal with stress. It’s simply the way the world works! Stressors come from multiple directions, including changes in the marketplace, business models, and customer demand, as well as infrastructure failures, improper or unexpected inputs, and bugs. As software architects, one of our jobs is to create solutions that meet both business and quality requirements while appropriately handling stress.
We typically approach stressors by trying to create solutions that are robust. Robust systems can continue functioning properly in the presence of internal and external challenges, but they also have one or more breaking points. When we pass a robust system's known threshold for a particular type of stress, it will fail. And when a system encounters an “unknown unknown” challenge, it will usually not be robust!
Recent years have seen new approaches, including resilient, antifragile, and evolutionary architectures. All of these approaches emphasize the notion of adapting to changing conditions in order to not only survive stress but sometimes to benefit from it. In this class, we’ll examine together the theory and practice behind these architectural approaches.
Java is a language in evolution. There are a handful of language changes in Java 9 and 10 plus several JDK changes in 9, 10, 11, and 12. Some of these changes are significant in that they allow us to do things more effectively than before. The difference can be anywhere from reducing code to avoiding errors that come from verbosity. In this presentation we will explore the language changes first. Then we will visit the additions to the JDK. Along the way we will also look at a few things that have been removed from Java as well.
We will program with Java quite differently in the future than we do today. The reason is that Java is embracing asynchronous programming like never before. This will have a huge impact on how we create services and web applications. In this presentation we will look at what asynchronous programming is, what continuations are, how they get implemented under the hood, and how we can benefit from them.
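Continuations themselves are still evolving in the JDK, but the direction is visible in today's APIs. Here is a minimal sketch (mine, using CompletableFuture; the fetch methods are stand-ins for remote calls) of composing asynchronous steps without blocking a thread between them:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncPipeline {
    public static void main(String[] args) {
        CompletableFuture<String> page = CompletableFuture
                .supplyAsync(AsyncPipeline::fetchUser)           // runs on a pool thread
                .thenApply(user -> user.toUpperCase())           // transform when ready
                .thenCombine(CompletableFuture.supplyAsync(AsyncPipeline::fetchOrders),
                        (user, orders) -> user + ": " + orders); // join two async results

        // join() only blocks here at the edge of the program;
        // the pipeline itself never parks a thread between steps.
        System.out.println(page.join());
    }

    static String fetchUser()   { return "alice"; }    // stands in for a remote call
    static String fetchOrders() { return "3 orders"; } // stands in for a remote call
}
```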
Java Modules are the future. However, our enterprise applications have legacy code, and lots of it. How in the world do we migrate from the old to the new? What are some of the challenges? In this presentation we will start with an introduction to modules and learn how to create them. Then we will dive into the differences between unnamed modules, automatic modules, and explicit modules. After that we will discuss some key limitations of modules, things that may surprise your developers if they're not aware of. Finally we will discuss how to migrate current applications to use modules.
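For orientation, this is what an explicit module descriptor looks like; module and package names here are hypothetical:

```java
// module-info.java — placed at the root of the module's source tree.
module com.example.orders {
    // Explicit dependency on another module; everything else stays hidden.
    requires java.sql;

    // Only this package is readable by consumers of the module.
    exports com.example.orders.api;
}
```

Unnamed and automatic modules exist precisely so that legacy JARs without a descriptor like this can still participate during migration.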
Vue is a new, progressive front-end framework. At first glance, Vue might look like Yet Another JavaScript Framework, but it deserves a closer look.
It is designed from the ground up to be incrementally adoptable, and can easily scale between a library and a framework depending on different use cases. It consists of an approachable core library that focuses on the view layer only, and an ecosystem of supporting libraries that helps you tackle complexity in large Single-Page Applications.
In this session we'll start with a look at how VueJS stacks up against the competition. We will explore VueJS from incremental adoption to building a full SPA. We'll cover the core concepts and capabilities and take a look at the growing ecosystem around it.
You understand the basics: the core Vue library, templates, styling, and components. You understand the how and why of Vue.js. What's next?
In this session we start with some core best practices for the Vue ecosystem. We dive deeper into some capabilities of the core Vue library, as well as branching out into some of the official add-ons; specifically:
Vuetify, the Vue CLI, vue-router, and nuxt.js.
You've got a legacy MV* app. It's hard to maintain, hard to test, and it's a thorn in your side. You're keeping it running, but both the code and your skillset are getting a little old. If a ground-up rewrite is out of the question, you can use Vue to refactor and modernize the app piecemeal, with minimal risk and zero downtime.
One of my favorite features of Vue.js is that it is incrementally adoptable, but what does this mean? Ultimately you can take any part of a webpage and turn it into a Vue component. Does that piece require some jQuery widget? No problem. The libraries work just fine, side by side.
This session provides a roadmap for taking a legacy app and incrementally turning it into a modern Vue masterpiece.
You have some modular code with a REST API. You are on your way to Microservices. Next, you package it in a container image that others can run. Simple. Now what? Your service needs to log information, needs to scale and load balance between its clones. Your service needs environment and metadata way outside its context. What about where the service will run? Who starts it? What monitors its health? What about antifragility? Updates? Networking? Oh my.
Don't get flustered. We will explore how Kubernetes simplifies the complexity of distributed computing.
This session will help you understand the terms, architecture and the mechanics of the Kubernetes tools. You will understand how to target your applications to a seemingly complex distributed compute platform.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts (1 of 2)
Aha moments with apps in containers can be quite liberating. The mobile space is saturated with “there's an app for that”. For us, we now expect “there's a container for that”. “Write once, run anywhere” (WORA) has changed to “Package once, run anywhere” (PORA). The growing community of containers is riding up the hype curve. We will look at many ways to assemble pods using architecture patterns you already know.
Your software package delivery and installation is no longer an rpm, deb, dmg, jar, war, native executable, or a run script; it is simply an image that has a common run container command.
During the presentation, we will explore some examples on Katacoda.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts
At the 2009 Agile conference, J.B. Rainsberger declared “Integration tests are a scam.” I agree. Come see some compelling reasons why consumer-driven contract testing is a much better approach, particularly for microservices.
We will explore different testing techniques on Kubernetes, including an important one called “Consumer-Driven Contracts”.
After a brief overview of the concepts, a live demonstration will show you how it works.
This is the droid you are looking for. Within this droid are hundreds of rules designed to review your code for defects, hotspots and security weaknesses. Consider the resulting analysis as humble feedback from a personal advisor. The rules come from your community of peers, all designed to save your butt.
We will explore techniques on how to add these checks to your IDE, your build scripts and your build pipelines.
Too much chatter in your pull requests? See how the analysis tools teach best practices, without ego or criticism, to a spectrum of developers. As a leader, see how to develop an effective code quality intern program around this technique. We will also see some techniques for using Kubernetes to obtain reports and dashboards right on your local machine and from your continuous integration pipeline.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts.
From operating systems on bare metal, to virtual machines on hypervisors, to container orchestration platforms: how we run our code and bundle our applications continues to evolve. Serverless computing continues our architectures' evolutionary path.
Kubernetes provides an ideal, vendor-agnostic platform for serverless computing. Serverless promises developers that we can worry less about the cluster and focus more on our logic. Building on your Kubernetes knowledge, we will discover the various serverless frameworks that run on Kubernetes. Specifically, we will unpack how two open source serverless frameworks, Kubeless and OpenFaaS, leverage Kubernetes to achieve their promises. We will explore how Knative is helping serverless providers evolve to the next level of sophistication.
Kubernetes out of the box is a strong platform for running and coordinating large collections of services, containers, and applications. As is, Kubernetes is powerful for many solutions.
Remember Underdog? He was a mild-mannered dog, but when stress and conflict were introduced to the plot, he took a magic pill and became a superhero. Istio is a superhero for Kubernetes.
Istio is an open, platform-independent service mesh that manages communications between services in a transparent way. With a distributed architecture many things can fail, slow down and become less secure. Istio provides solutions to those stresses in our plot toward this architecture style:
• Traffic management
• Observability
• Policy enforcement
• Service identity and security
We will explore these benefits with Istio through some working examples on Kubernetes. The key point is this meshing solution allows your Java code to be less coupled to the inherent weaknesses of a distributed platform.
Explore another learning medium to add to your toolbox: Katacoda.
This is a 90-minute mini-workshop where you learn to be an author on Katacoda. Bring your favorite laptop with just a browser and a text editor.
Have a Github account and bring your laptop. Let's learn together.
We are continuously learning and keeping up with the changing landscapes and ecosystems in software engineering. Some technologies are difficult to learn or may take too much time for us to set up just to get to the key points of each technology. One of the reasons why you might be here at NFJS is to do exactly that – to learn. Great!
There are many mediums we use to learn and we often combine them for different perspectives. Books, how-to articles, GitHub readmes, blog entries, recorded talks on YouTube, and online courses. All these help us sort through the new concepts. I'm sure you have your favorites.
Katacoda is becoming a compelling platform for learning and teaching concepts. You can also author your own topics for public communities or private teams. Katacoda offers a platform that hosts live server command lines in your browser with a split screen for course material broken into easy to follow steps.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts.
Kubernetes is a powerful platform for running containers and distributing computation workloads across resources. A significant question is how you get all your code to this platform, continuously.
In 2019 our community is bursting with new solutions to assist our delivery pipelines. While Jenkins is a dominant player, there is a growing array of new ideas and choices. From coding at your laptop to building containers to deployments, we will explore the various tools and techniques to reduce the delivery frictions.
Kubernetes is also a fitting platform for hosting your continuous tools, pipeline engines, registries, testing, code analysis, security scans, and delivery workflows.
From this session, you will understand the latest tools and techniques for pipelining on Kubernetes. Let's up the game on your Maturity Model.
A number of developers and organizations are beginning to make use of Functional Programming in Java. With anything that's new, we often learn it the hard way. By stepping back and taking a look at programming style as idioms, we can quickly gravitate towards better coding style and also avoid some common traps that we often get drawn towards.
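One of those common traps, as a small illustration (my example, not necessarily one covered in the session): a Java Stream is a one-shot pipeline, which surprises developers who treat it like a collection.

```java
import java.util.List;
import java.util.stream.Stream;

public class StreamReuseTrap {
    public static void main(String[] args) {
        Stream<Integer> numbers = List.of(1, 2, 3, 4).stream();

        System.out.println(numbers.filter(n -> n % 2 == 0).count()); // fine: 2

        // Trap: a Stream is a one-shot pipeline, not a collection.
        // A second use of `numbers` would throw IllegalStateException:
        // System.out.println(numbers.map(n -> n * n).count());

        // Better idiom: keep the source, derive a fresh stream per use.
        List<Integer> source = List.of(1, 2, 3, 4);
        System.out.println(source.stream().map(n -> n * n).count()); // 4
    }
}
```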
Based on the book with the same title as this talk, this presentation will explore the modern features of JavaScript that make it a pleasant language to use. Come to this presentation to get a quick jumpstart on its elegant capabilities, from passing arguments, to creating iterators, to destructuring, to working with classes using ES6 and beyond.
JavaScript modules are highly powerful but can be confusing given the number of options available to export and import. In this presentation we will take a thorough look at what modules are, how to export from a module, how to import, and the dos and don'ts. This will help you get comfortable with modules as you develop applications with JavaScript libraries/frameworks like Angular, React, Vue, etc.
Some developers simply hate type inference. And then there are others who love it. Neither camp is entirely right. In Java we have been making extensive use of type inference for several years without realizing it. The introduction of “var” in Java 10 has stirred up some surprising debate. In this presentation we will step back and review type inference in Java. Then we will dive deep into type inference in Java 10 and 11. We will wrap up the presentation with good recommendations on when to use type inference and when to avoid it.
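A small sketch of the kind of judgment calls involved (my examples, with hypothetical names):

```java
import java.util.ArrayList;
import java.util.Map;

public class VarExamples {
    public static void main(String[] args) {
        // Helpful: the type is obvious from the right-hand side,
        // and var removes noisy repetition.
        var names = new ArrayList<String>();
        names.add("Ada");

        // Helpful: long generic types are the classic win.
        var scores = Map.of("Ada", 10, "Linus", 9);
        System.out.println(scores.get("Ada"));

        // Questionable: nothing on the right reveals the type, so the
        // reader must go look up what a method like process() returns.
        // var result = process();

        System.out.println(names);
    }
}
```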
Machine Learning is a huge, deep field. Come get a head start on how you can learn about how machines learn.
This talk will be an overview of the Machine Learning field. We’ll cover the various tools and techniques that are available to you to solve complex, data-driven problems. We’ll walk through the algorithms and apply them to some real but accessible problems so you can see them at work.
Documents contain a lot of information. We'll introduce you to a variety of techniques to extract it.
Machine Learning techniques are useful for analyzing numeric data, but they can also be useful for classifying text, extracting content, and more. We will discuss how a variety of open source tools for extracting content, identifying elements and structure, and analyzing text can be used in distributed, microservice-friendly ways.
Deep Learning is an evolution of the capabilities of more conventional machine learning to take advantage of the extra data available from Big Data systems. With more data, many of the manual aspects of feature selection and other machine learning steps can be derived automatically. We will highlight many of the main deep learning frameworks and give you a hands on introduction to what is possible and how you can start to use them.
What comes after machine learning and deep learning? How about dynamic systems that need new ways of finding paths through complex scenarios such as video games, challenging board games and more.
In addition to covering the main ideas of deep reinforcement learning, we will cover some of the main tools and frameworks.
An overview of various popular streaming technologies on the JVM: Kafka Streams, Apache Storm, Spark Streaming, and Apache Beam. We'll discuss a “bill of rights” for what to expect of all streaming libraries and frameworks: security, failover, and exactly-once processing.
Streaming is now an essential part of our lives. We have cheaper drives, faster networks, and more memory. We can haul tons of data, but we need to process that data, manipulating and enriching it. To do so we need some sort of streaming solution. Let's look at the most common ones and expose the differences and similarities between frameworks so you, the attendee, can make a better decision.
Kafka is more than just a messaging queue with storage. It goes beyond that, and with open source technology from Confluent it has become a full-fledged data ETL and data streaming ecosystem.
When we utter the word Kafka, it is no longer just one component but can be an entire data pipeline ecosystem that transforms and enriches data from source to sink. It offers different ways to handle that data as well. In this presentation, we define the pieces of that ecosystem.
We then discuss ksqlDB, a SQL layer built upon Kafka Streams that provides a simple query language for performing streaming operations.
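To make the Kafka Streams layer beneath ksqlDB concrete, here is a minimal sketch (mine; topic names and the enrichment step are hypothetical) of a transform-and-enrich pipeline from source to sink:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class EnrichmentPipeline {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-enricher");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();

        // Read from a source topic, enrich each record, write to a sink topic:
        // the "source to sink" pipeline in miniature.
        KStream<String, String> orders = builder.stream("orders");
        orders.mapValues(value -> value + ",enriched=true")
              .to("orders-enriched");

        new KafkaStreams(builder.build(), props).start();
    }
}
```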
For those still grappling with generics, this will be an attempt to clear the air. What are wildcards? What is extends? What is super? What is covariance? What is contravariance? What is invariance? What is erasure? Why and when do I need this?
Generics, or parameterized types, are one of the more painful items in any statically typed language on the JVM. This presentation sets out to overcome some of these hurdles and explain these confusing terms. We will cover each of these questions in turn.
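As a preview of extends and super in action, here is a small sketch (my example) of the covariance/contravariance distinction, often remembered as PECS (producer-extends, consumer-super):

```java
import java.util.ArrayList;
import java.util.List;

public class PecsExample {
    // "? extends Number": the list PRODUCES Numbers for us to read.
    // Covariant: a List<Integer> is acceptable here, but we cannot add to it.
    static double sum(List<? extends Number> producer) {
        double total = 0;
        for (Number n : producer) total += n.doubleValue();
        return total;
    }

    // "? super Integer": the list CONSUMES Integers we put in.
    // Contravariant: a List<Number> or List<Object> is also acceptable here.
    static void fill(List<? super Integer> consumer) {
        consumer.add(1);
        consumer.add(2);
    }

    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>();
        fill(ints);                    // List<Integer> consumes Integers
        System.out.println(sum(ints)); // List<Integer> produces Numbers: 3.0
    }
}
```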
By the end of this conference you will have learned many new tools and technologies. The easy part is done; now for the hard part: getting the rest of the team (and management) on board with the new ideas. Easier said than done.
Whether you want to effect culture change in your organization, lead the transition toward a new technology, or are simply asking for better tools, you must first understand that having a “good idea” is just the beginning. How can you dramatically increase your odds of success?
You will learn 12 concrete strategies to build consensus within your team as well as 6 techniques to dramatically increase the odds that the other person will say “Yes” to your requests.
As a professional mentalist, Michael has been a student of psychology, human behavior, and the principles of influence for nearly two decades. There are universal principles of influence that are necessary to both understand and leverage if you want to be a more effective leader of change in your organization.
In this session we discuss strategies for getting your team on board as well as when and how to approach management within the department and also higher up in the organization.
In Part 1, you learned the core principles of influence and persuasion. How do we take this back to the office and apply what we've learned?
We dive deep into specific strategies to get both the team and the business on board with your ideas and solutions. We cover several real-world patterns you can follow to be more effective and more persuasive. Part 1 was conceptual; Part 2 is practical.
On the NFJS tour, there are questions that seem to come up again and again. One common example is “How do we determine which new tools and technologies we should focus our energy on learning?” Another is “How do we stop management from forcing us to cut corners on every release so we can create better and more maintainable code?”, which, after a while, becomes “How can we best convince management we need to rewrite the business application?”
There is a single meta-answer to all these questions and many others.
It begins with the understanding that what we as engineers value and what the business values are often very different (even if the ultimate goals are the same). By being able to understand these different perspectives, it's possible to begin to frame our arguments around the needs and the wants of the business. This alone will make any engineer significantly more effective.
This session picks up where “Stop writing code and start solving problems” leaves off, discussing what value is and how we can align the values of the business with the needs and values of the engineer.
Many new features have been added between the last Long Term Support release in Java 8 and the current one in Java 11. This talk will summarize many of those capabilities, from the Jigsaw implementation of JPMS to unmodifiable collections to local variable type inference and more. In addition to the basic code changes, the new six-month release schedule and associated licensing issues will be reviewed.
If, as anticipated, Java 12 is released in March and Java 13 in September, new features from those versions will also be included, even though they will break the joke in the title of this talk.
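To ground a few of those capabilities, here is a small sketch (mine) combining unmodifiable collection factories from Java 9, local-variable type inference from Java 10, and the HTTP client standardized in Java 11:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.util.List;

public class Java11Sampler {
    public static void main(String[] args) {
        // Java 9: unmodifiable collection factories.
        List<String> releases = List.of("9", "10", "11");

        // Java 10: local-variable type inference.
        var latest = releases.get(releases.size() - 1);
        System.out.println("Latest LTS: " + latest);

        // Java 11: the standardized HTTP client (request built, not sent).
        var client = HttpClient.newHttpClient();
        var request = HttpRequest.newBuilder(URI.create("https://openjdk.java.net"))
                                 .GET()
                                 .build();
        System.out.println(request.uri() + " via " + client.version());
    }
}
```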
Kotlin is a practical language designed for the JVM. It focuses on pragmatism and safety, with an emphasis on interoperability and tool support. Kotlin is statically typed and includes null checks in the type system itself. The language runs anywhere Java does, from microservices to Android apps. This workshop will show how to use Kotlin to simplify anything you originally planned to do in Java.
Kotlin combines object-oriented programming with functional features like lambda expressions, functional types, and data classes. Since it comes from JetBrains, the company that produces the IntelliJ IDEA development environment, its tool support is excellent. IntelliJ even provides a Java-to-Kotlin converter to help you get started.
This presentation will cover the basic types, defining and calling functions, programming with lambdas, higher order functions, and DSLs. The goal is to give you enough background to be productive while you continue to learn and grow with the language.
As developers we not only operate in different contexts, but also often have these different contexts interplay as part of our work.
Each of the tools that we use — version control systems like Git (along with collaborative tools like GitHub/GitLab), IDEs like Eclipse/IntelliJ, build systems like Gradle, CI/CD tooling like Jenkins, IaC tools like Ansible, the command line — all introduce context.
To be effective developers we need to know when to operate in a certain context, and when to combine or tease apart how these contexts interplay.
Can you improve your release announcements if you format your commit messages consistently? You bet!
How should your build tool interact with your version control system?
What does naming your files have to do with how you use your IDE?
This session will take a look at several of these contexts — it will attempt to discern between them, explore when you should separate them and when you attempt to bring them together.
With lots of examples, and lots of quizzes this session will definitely leave you thinking about a few things.
If you are interested in a different approach to writing your next micro-service, or are knee-deep in the DevOps world with Kubernetes and Docker (both written using Go), you need to know Go.
Come join me in a rather quick introduction to the language and its merits and shortcomings.
Micro-services, DevOps, command-line utilities — Go has been the catalyst in a quiet revolution happening right under our noses. Go, from Google, aims to be a simple language for writing scalable and reliable software. Go brings a unique tilt to many aspects of language design, including enforcing a strict project structure, powerful tooling to support things like code-style enforcement, as well as “goroutines” to allow for concurrency.
Ansible, like Git, aims to be a simple tool.
The benefit here is that the level of abstraction that Ansible offers is paper-thin, with no complicated workflows, or opinions enforced by the tool itself.
The downside is that without a prescribed approach to Ansible, developing your playbooks often becomes a case of trial-and-error.
As engineers steeped in the DevOps mindset we must be able to use the tool effectively, allowing us to accelerate and shorten the lead time from development to production.
In this session we will take a look at some lessons learned from working with Ansible.
An integral part to any DevOps effort involves automation. No longer do we wish to manage tens, hundreds or even thousands of servers by hand, even if that were possible. What we need is a programmatic way to create and configure servers, be those for local development, all the way to production.
This is where tools like Ansible come into play. Ansible offers us a way to define what our server configurations are to look like using plain-text, version-controlled configuration files.
Not only does this help with avoiding “snow-flakes”, but it also allows server configuration to participate in the SDLC, pulling server configuration closer to the developers.
In this session we will explore what Ansible has to offer, decipher the Ansible terminology, and run some examples to configure a local server.
Graal is a VM, and an awesome VM at that: able to run a variety of languages, and fast. The execution times can be impressive, too. This VM can run anything: JavaScript, Python 3, Ruby, R, JVM-based languages like Java, Scala, and Kotlin, and LLVM-based languages such as C and C++.
We are living in truly exciting times. There is so much interesting technology, including in the VM space. Graal is a virtual machine and shared memory system for multiple languages. GraalVM can either run standalone or embedded in OpenJDK or Node.js. Graal can even embed inside databases such as MySQL or Oracle. In this presentation, we look at this exciting VM, how to start it, how to run polyglot applications, and how to integrate it all within the same VM.
Reactive architecture patterns allow you to build self-monitoring, self-scaling, self-growing, and self-healing systems that can react to both internal and external conditions without human intervention. These kinds of systems are known as autonomic systems (our human body is one example). In this session I will show you some of the most common and most powerful reactive patterns you can use to automatically scale systems, grow systems, and self-repair systems, all using the basic language API and simple messaging. Through code samples in Java and actual run-time demonstrations, I'll show you how the patterns work and also show you sample implementations. Get ready for the future of software architecture, one that you can start implementing on Monday.
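To give a flavor of the self-healing idea using nothing but the basic language API and simple messaging, here is a tiny hypothetical sketch (mine, not the session's sample code): a supervisor restarts a worker when heartbeats stop arriving on an in-memory channel.

```java
import java.util.concurrent.*;

public class SelfHealingWorker {
    static final BlockingQueue<String> heartbeats = new LinkedBlockingQueue<>();
    static final ExecutorService pool = Executors.newCachedThreadPool();

    static Future<?> startWorker() {
        return pool.submit(() -> {
            try {
                while (true) {
                    heartbeats.put("alive");            // plain messaging, no frameworks
                    TimeUnit.MILLISECONDS.sleep(100);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();     // simulated crash path
            }
        });
    }

    public static void main(String[] args) throws Exception {
        Future<?> worker = startWorker();
        worker.cancel(true);                            // simulate a failure

        // Supervisor loop: silence on the channel triggers a self-repair.
        for (int i = 0; i < 5; i++) {
            if (heartbeats.poll(300, TimeUnit.MILLISECONDS) == null) {
                System.out.println("no heartbeat, restarting worker");
                startWorker();
            }
        }
        pool.shutdownNow();
    }
}
```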
We developers really like code.
Code, being plain text, can be version-controlled, versioned, and follow a traditional SDLC.
For the longest time, however, we were forced to live with having most of our CI/CD and server configurations live outside of our codebases, often at the mercy of infrastructure/operations teams.
With the evolution of DevOps come constructs like IaC (Infrastructure as Code), and with Jenkins 2.0, we can now manage our Jenkins job configurations as code!
In this session we will explore the concept of “Pipeline as Code”, including the DSL that Jenkins offers, and how we can use this to configure Jenkins jobs via simple, version-controlled Jenkinsfiles. We will see how we can create Jenkins jobs by autodiscovering repositories, as well as when we branch our code to create releases.
The fundamental testing libraries in Java have undergone complete redesigns in the past few years. JUnit 5, known as JUnit Jupiter, redesigns the most well-known tool in all of testing. This talk will demonstrate the new features, how they are intended to be used, and discuss experimental ideas in the pipeline.
JUnit has been remarkably stable over the years and is one of the most widely adopted frameworks in the Java world. The latest version, JUnit 5, takes JUnit to the next level. Full of new features like conditional test execution, parametric testing, labeling and filtering tests, and more, it brings all the modern thinking on testing into the JUnit world. It also takes advantage of the functional features added to Java since version 8 to create a powerful, new library for testing your code.
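A small sketch of a few of those features (my example, with hypothetical test names): parametric testing, conditional execution, and tag-based filtering.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.condition.EnabledOnOs;
import org.junit.jupiter.api.condition.OS;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

import static org.junit.jupiter.api.Assertions.assertTrue;

class JUnit5FeaturesTest {

    // Parametric testing: one test method, many inputs.
    @ParameterizedTest
    @ValueSource(ints = {2, 4, 6, 8})
    void evensAreEven(int n) {
        assertTrue(n % 2 == 0);
    }

    // Conditional test execution: skipped entirely unless run on Linux.
    @Test
    @EnabledOnOs(OS.LINUX)
    void linuxOnlyBehavior() {
        assertTrue(System.getProperty("os.name").toLowerCase().contains("linux"));
    }

    // Labeling and filtering: build tools can include or exclude by tag.
    @Test
    @Tag("slow")
    void expensiveScenario() {
        assertTrue(true);
    }
}
```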
The Gradle build tool is one of the most successful projects in the Groovy ecosystem because it addresses a difficult problem – every major build is a custom build. Gradle builds are written in Groovy, so the full power of the language is available if you need it. Gradle supports Maven project structure and repositories and uses Ivy dependency management without being bound by their normal constraints. With major systems like Grails, Hibernate, and the Spring Framework moving to Gradle, this is a technology worth taking the time to understand.
This talk will cover the basics of Gradle both through simple examples and by examining the build files for major open source projects.
“Forewarned is forearmed”
-Old Proverb
If it seems like humans are easy to deceive, it's because we are. The myriad traits that make humankind so eminently exploitable are practically baked into our DNA. Too often these same traits make it into the software we build. This session takes an entertaining look at why humans are so easy to fool and goes on to explore what we can do to overcome our weaknesses and build more secure software.
Security is everyone's responsibility, but the burden disproportionately falls on us. As software engineers, we are the last line of defense in our organization. We build the technology, and that technology is constantly scanned, probed, and tested. Building truly secure software requires going beyond mere functional requirements; it requires a complete shift in how we think about problems.
One of the biggest impediments to developer productivity and the overall success of the software organization is inefficient processes. Without the right tooling to get to the root of the problem, debugging build and test failures is incredibly frustrating and leads to delays in shipping software.
In this workshop, you’ll work through examples using Maven, Gradle, and Gradle Enterprise on our real data and that of some popular open source projects. You'll learn how to measure build speed and reliability, which metrics are important, how to apply these analyses to your own builds, and how to use build caching to make those builds dramatically faster enabling your team to achieve better developer productivity.
With large tech companies such as Google famously touting the advantages of their monorepo strategy and other top tech companies talking about their equally impressive multirepo solutions, it can be difficult to decide which strategy is best for you and your team. Luckily, Gradle has plenty of experience helping organizations of all sizes and build tools through monorepo and multirepo transitions.
In this session, we hope to share our expertise no matter your build tool of choice. We’ll cover all the important decisions of both the monorepo and multirepo paths so that you can leave prepared for the path you choose and confident that you won’t end up abandoning the transition or stuck in the worst of both worlds.
These topics include how to:
• work with and not against your organizational structure
• select a new source code repository or modify an existing one
• help geographically distributed teams
• set up language- and platform-agnostic infrastructure
• model the repository structure onto CI pipelines
• factor performance and reliability tradeoffs into build tool choice
While the Web itself has strong decentralized aspects to how it is used, the backend technologies are largely centralized. The naming systems, the routing systems and the traffic that all points back to the same place for a website are all centralized technologies. This creates both a liability as well as a control point.
In order to break free of some of these limitations, new technologies are emerging to provide a more decentralized approach to the Web.
This talk will walk you through some emerging technologies that provide decentralized content storage and distribution, edge computing, and more. We will touch upon the Interplanetary Filesystem, WebTorrent, Blockchain spin-offs, and more.
Event Storming is a low-tech and powerful technique for creating models of business processes that translate directly into the realm of Domain-Driven Design. All you need is a combination of domain experts and engineers, a large wall to act as a modeling surface, several colors of sticky notes, and markers.
Event Storming is much better caught than taught, so in this session, we'll have the opportunity to work together to understand and model an exemplary problem domain in small groups. You'll need to bring a laptop to this session and create a free Realtime Board account to use as a virtual modeling surface that you can take away from the session.