Microservices continue to be the latest buzzword in the industry, and probably will be for some time. If you are not sure what microservices are, or want to start getting your feet wet in understanding the basics of this architecture style, then this session is just right for you. In this session I will cover the basics of the microservices architecture pattern. We'll talk about distributed architecture, what a microservice is, what a bounded context means, how to determine the right level of service granularity, the dangers of inter-service communication, and the role of the API layer. By the end of this session you will have a good idea of what the microservices architecture style is all about and whether it is a good fit for you.
George Santayana is famous for saying “Those who cannot remember the past are condemned to repeat it”. When SOA (Service-Oriented Architecture) was all the craze, everyone got excited about services but forgot about the data. This ended in disaster. History repeats itself, and here we are with microservices, where everyone is excited about services but once again forgets all about the data. In this session I will discuss some of the challenges associated with breaking apart monolithic databases, and then show techniques for effectively creating data domains and splitting apart a database. I consider the data side of microservices the hardest aspect of this architecture style. In the end, it's all about the data.
Once you break things apart into microservices, you must then put them back together. In other words, individual services still sometimes need to talk to one another to complete a given business transaction, whether that transaction is synchronous or asynchronous. In this session I talk about the various patterns of communication within microservices - orchestration, aggregation, and adapters. I also talk about coupling between services, including stamp coupling and bandwidth issues, and how to address these common communication woes.
Have you ever wondered how to share data between microservices? Have you ever wondered how to share a single database schema between hundreds (or even thousands) of microservices (cloud or on-prem)? Have you ever wondered how to version relational database changes when sharing data in a microservices environment? If any of these questions intrigue you, then you should come to this session. In this session I will describe and demonstrate various caching strategies and patterns that you can use in microservices to significantly increase performance, manage common data in a highly distributed architecture, and even manage data synchronization from cloud-based microservices. I'll describe the differences between a distributed cache and a replicated cache. Then, using live coding and demos with Hazelcast and Apache Ignite, I'll demonstrate how to share data and how to build space-based microservices, leveraging caching to its fullest extent.
Java is now on a six-month release schedule, with new features being added all the time. This talk summarizes the features that have been added to Java, including collection factory methods, private methods in interfaces, records, the enhanced switch statement, and more. The goal is to show not only what has been added, but why and where to use them.
Additional topics will include local variable type inference, the new HTTP client, pattern matching, and text blocks. This talk will be updated throughout the year as more features are added to new releases.
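To give a flavor of the material, here is a small sketch that combines a few of the features mentioned above (records, switch expressions, text blocks, local variable type inference, and the new HTTP client); the class and variable names are illustrative, not taken from the talk:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ModernJavaSampler {

    // A record: a compact, immutable data carrier.
    record Point(int x, int y) { }

    public static void main(String[] args) throws Exception {
        var point = new Point(3, 4);  // local variable type inference

        // A switch expression yields a value directly.
        String half = switch (Integer.signum(point.x())) {
            case 1 -> "right half";
            case -1 -> "left half";
            default -> "on the y-axis";
        };
        System.out.println(half);

        // A text block keeps multi-line strings readable.
        String json = """
                {"x": %d, "y": %d}
                """.formatted(point.x(), point.y());
        System.out.println(json);

        // The standard HTTP client introduced in Java 11.
        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create("https://example.com")).build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}
```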
Gradle is the build tool of choice in the open source world, and it is rapidly becoming the standard in industry as well. Anyone who works with Gradle on a Java project knows the basics of the Java plugin and how to write simple tasks in Groovy. Gradle can do much more, however. This talk will demonstrate how to write your own custom task classes and how to create Gradle plugins from them. Other Gradle features will be demonstrated as well, including file manipulation, incremental builds, generating the Gradle wrapper, and resolving conflicts in dependencies.
Gradle Inc also provides a free build scan capability to analyze build files. This too will be demonstrated, as well as profiling your build, determining dependencies, and more.
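As a taste of the custom-task material, here is a minimal sketch of a custom task class written in Java (task classes can also be written in Groovy or Kotlin); the task name and its property are hypothetical:

```java
import org.gradle.api.DefaultTask;
import org.gradle.api.tasks.Input;
import org.gradle.api.tasks.TaskAction;

// A custom task class; a build script could register it with
// tasks.register("greet", GreetingTask) and configure the recipient there.
public class GreetingTask extends DefaultTask {

    private String recipient = "world";  // hypothetical configurable input

    @Input
    public String getRecipient() {
        return recipient;
    }

    public void setRecipient(String recipient) {
        this.recipient = recipient;
    }

    @TaskAction
    public void greet() {
        getLogger().lifecycle("Hello, " + recipient + "!");
    }
}
```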
Kotlin is a practical language designed for the JVM. It focuses on pragmatism and safety, with an emphasis on interoperability and tool support. Kotlin is statically typed and includes null checks in the type system itself. The language runs anywhere Java does, from microservices to Android apps. This workshop will show how to use Kotlin to simplify anything you originally planned to do in Java.
Kotlin combines object-oriented programming with functional features like lambda expressions, functional types, and data classes. Since it comes from JetBrains, the company that produces the IntelliJ IDEA development environment, its tool support is excellent. IntelliJ even provides a Java-to-Kotlin converter to help you get started.
This presentation will cover the basic types, defining and calling functions, programming with lambdas, higher order functions, and DSLs. The goal is to give you enough background to be productive while you continue to learn and grow with the language.
On the 2017 tour, I introduced the notion of “serverless” and Functions as a Service (FaaS) platforms. We examined the motivation for serverless computing, compared serverless to other cloud-native infrastructure approaches, navigated some architectural tradeoffs, and took a whirlwind tour of the Big 3 FaaS providers.
In this 2018 edition of the talk, we’ll still cover a few of the same themes to bring new folks up to speed, but we’ll also look at what’s changed in this ecosystem over the past year, take a look at new or enhanced features, offerings, runtimes, and programming models, and examine what use cases are becoming popular for serverless computing. We’ll also look at how tradeoffs have evolved, and definitely throw in a few demos.
In this presentation, we'll build, test, and deploy an image-processing pipeline using Amazon Web Services such as Lambda, API Gateway, Step Functions, DynamoDB, and Rekognition.
We'll take a look at some of the following topics:
As an architectural style, microservices are here to stay. They have crossed the proverbial chasm, and now it’s time to get to work. Microservices provide us with the ability to create truly evolutionary architectures composed of cohesive and autonomous components using well known and characterized distributed systems patterns.
As we create and compose components across the hard boundary of the network, we become deeply interested in establishing the correct boundaries, which has resulted in renewed interest in system design and decomposition. Fortunately, the tried and true practices of Domain-Driven Design are available to us.
In this presentation, we will cover a distillation of strategic (bounded contexts, subdomains, context mapping) and tactical (aggregates, domain events) DDD techniques and demonstrate how they enable us to create effective event-driven microservices.
Chaos Engineering, pioneered by Netflix, is the discipline of experimenting on a distributed system in order to build confidence in the system's capability to withstand turbulent conditions in production.
In this presentation, we'll take a look at the problem of building resilient software, and discuss how applying Google's SRE principles and patterns for architectural resiliency can help us to solve it. We'll then examine how the practice of Chaos Engineering can help us to prove or disprove the resiliency of our systems.
All software architectures have to deal with stress. It’s simply the way the world works! Stressors come from multiple directions, including changes in the marketplace, business models, and customer demand, as well as infrastructure failures, improper or unexpected inputs, and bugs. As software architects, one of our jobs is to create solutions that meet both business and quality requirements while appropriately handling stress.
We typically approach stressors by trying to create solutions that are robust. Robust systems can continue functioning properly in the presence of internal and external challenges, but they also have one or more breaking points. When we pass a robust system's known threshold for a particular type of stress, it will fail. And when a system encounters an “unknown unknown” challenge, it will usually not be robust.
Recent years have seen new approaches, including resilient, antifragile, and evolutionary architectures. All of these approaches emphasize the notion of adapting to changing conditions in order to not only survive stress but sometimes to benefit from it. In this class, we’ll examine together the theory and practice behind these architectural approaches.
JavaScript will celebrate its 24th birthday in 2020. For a language that has been around for so long, it has seen very few, if any, changes to the language itself. Well, all that is about to change with ECMAScript.next (or ECMAScript 6). ECMAScript 6 modernizes JavaScript syntax, while bringing in features such as modules for better namespacing, class as a first-class construct, and a variety of additional operators, thus ensuring that JavaScript is ready for the next era of large-scale modern web applications. ES 7, 8, 9, and now 10 all build on the features introduced by ES6 to further the language.
In this session we will take a look at some of the features that ECMAScript 6 / 7 / 8 / 9 and 10 bring to the table. We will take an exploratory approach, and by the end of 3 hours, you will be well versed in ALL of the new features in JavaScript.
As developers we not only operate in different contexts, but also often have these different contexts interplay as part of our work.
Each of the tools that we use — version control systems like Git (along with collaborative tools like GitHub/GitLab), IDEs like Eclipse/IntelliJ, build systems like Gradle, CI/CD tooling like Jenkins, IaC tools like Ansible, the command line — all introduce context.
To be effective developers, we need to know when to operate in a particular context, and when to combine or tease apart the ways these contexts interplay.
Can you improve your release announcements if you format your commit messages consistently? You bet!
How should your build tool interact with your version control system?
What does naming your files have to do with how you use your IDE?
This session will take a look at several of these contexts — it will attempt to discern between them, and explore when you should separate them and when you should bring them together.
With lots of examples and lots of quizzes, this session will definitely leave you thinking about a few things.
On the NFJS tour, there are questions that seem to come up again and again. One common example is “How do we determine which new tools and technologies we should focus our energy on learning?” Another is “How do we stop management from forcing us to cut corners on every release so we can create better and more maintainable code?”, which, after a while, becomes “How can we best convince management we need to rewrite the business application?”
There is a single meta-answer to all these questions and many others.
It begins with the understanding that what we as engineers value and what the business values are often very different (even if the ultimate goals are the same). By understanding these different perspectives, it's possible to begin to frame our arguments around the needs and wants of the business. This alone will make any engineer significantly more effective.
This session picks up where “Stop writing code and start solving problems” leaves off, discussing what value is and how we align the values of the business with the needs and values of the engineer.
An integral part of any DevOps effort involves automation. No longer do we wish to manage tens, hundreds, or even thousands of servers by hand, even if that were possible. What we need is a programmatic way to create and configure servers, whether for local development or all the way to production.
This is where tools like Ansible come into play. Ansible offers us a way to define what our server configurations are to look like using plain-text, version-controlled configuration files.
Not only does this help with avoiding “snowflakes”, but it promotes server configuration to a full participant in the SDLC, pulling it closer to the developers.
In this session we will explore what Ansible has to offer, decipher the Ansible terminology, and run some examples to configure a local server.
Ansible, like Git, aims to be a simple tool.
The benefit here is that the level of abstraction that Ansible offers is paper-thin, with no complicated workflows, or opinions enforced by the tool itself.
The downside is that without a prescribed approach to Ansible, developing your playbooks often becomes a case of trial-and-error.
As engineers steeped in the DevOps mindset we must be able to use the tool effectively, allowing us to accelerate and shorten the lead time from development to production.
In this session we will take a look at some lessons learned when working with Ansible. Topics covered:
In this session we will take a look at building applications with Angular. We will build a very simple application from the ground up, and attempt to understand the approach of Angular, as well as understand some of the terminology that Angular introduces.
This session will focus on Angular 10.
TypeScript, Components, Annotations/Directives, Observables, Reactive Stores, Model-Driven forms … Oh my! Angular, much like AngularJS (1.x), despite being a powerful platform for building rich client-side applications, comes laden with both new terminology and a “newer” approach to writing client-side code.
In this session, as we build a simple application, we will attempt to tease apart these concepts, slowly building our understanding of how these pieces come together, and how we can leverage them to build rich client-side applications.
Details
Along the way we will see what the angular-cli generates, how to use the Angular style guide to follow conventions adopted by the Angular community at large, and some ways to use the angular-cli tool.
In this session we will take a look at building applications with Angular. We will build a very simple application from the ground up, and attempt to understand the approach of Angular, as well as understand some of the terminology that Angular introduces.
This session will focus on Angular 10.
TypeScript, Components, Annotations/Directives, Observables, Reactive Stores, Model-Driven forms … Oh my! Angular, much like AngularJS (1.x), despite being a powerful platform for building rich client-side applications, comes laden with both new terminology and a “newer” approach to writing client-side code.
In this session, as we build a simple application, we will attempt to tease apart these concepts, slowly building our understanding of how these pieces come together, and how we can leverage them to build rich client-side applications.
Details
Along the way we will see how to use pipes in Angular, how to use the Angular style guide to follow conventions adopted by the Angular community at large, and some ways to use the angular-cli tool.
We developers really like code.
Code, being plain text, can be version-controlled, versioned, and taken through a traditional SDLC lifecycle.
For the longest time, however, we were forced to live with most of our CI/CD and server configurations residing outside of our codebases, often at the mercy of infrastructure/operations teams.
With the evolution of DevOps come constructs like IaC (Infrastructure as Code), and with Jenkins 2.0, we can now manage our Jenkins job configurations as code!
In this session we will explore the concept of “Pipelines as Code”, including the DSL that Jenkins offers, and how we can use it to configure Jenkins jobs via simple, version-controlled Jenkinsfiles. We will see how we can create Jenkins jobs by auto-discovering repositories, as well as when we branch our code to create releases.
In this session we will get acquainted with Docker. We will discuss what Docker is, how to install it, and how to start using it. We will also explore some of the benefits of containerizing your applications.
Containers are taking over the world. Containers provide a means to have hermetic builds of your software, allowing for truly immutable testing and delivery of your software. Docker is one of many containerization technologies, and in this session we will take a brief look at Docker and what it has to offer.
In this session we will dive deeper into Dockerfiles. We will explore the DSL that Dockerfiles provide to allow for the automation of image creation.
Dockerfiles provide a means to automate the creation of images, and consequently the containers within which our applications run. The Dockerfile, though minimal, provides us with everything we need to package our software and enable it to run. In this session we will dive deep into the Dockerfile DSL, explore the many commands it provides, and along the way examine some differences between similar commands and some gotchas.
We live and operate in a world where our services not only need to be available 24/7, but must also respond to surges in demand and scale down when demand subsides.
Reactive systems, and reactive architectures have arisen to address this very concern.
Vert.x is a polyglot toolkit that makes writing reactive applications on the JVM possible.
With a non-blocking, event-driven architecture, Vert.x can help you scale your application, leveraging the multi-threaded nature of the JVM and the multi-core abilities of your hardware.
In this session we will take a look at Vert.x, its programming model, and its architecture.
We will write some code, and explore the ecosystem around Vert.x, as well as discuss some of the gotchas that you might come across as you develop Vert.x applications.
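To set expectations for the coding portion, a minimal verticle might look roughly like this (the port and message are placeholders, not taken from the session):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

// A verticle is Vert.x's unit of deployment; its handlers run on the event loop,
// so they must never block.
public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start() {
        vertx.createHttpServer()
             .requestHandler(req -> req.response().end("Hello from Vert.x!"))
             .listen(8080);
    }

    public static void main(String[] args) {
        Vertx.vertx().deployVerticle(new HelloVerticle());
    }
}
```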
In this example-driven presentation, you'll learn how to leverage Spring Boot to accelerate application development, enabling you to focus coding on logic that drives application requirements with little concern for code that satisfies Spring's needs.
For over a decade, Spring has sought to make enterprise Java development easier. It began by offering a lighter alternative to EJBs, but went on to address things such as security, working with various sorts of databases, cloud-native applications, and reactive programming. And along the way, Spring even took steps to make itself easier to use, offering Java-based and automatic component configuration. Even so, there's still a lot of near-boilerplate code required to develop Spring applications.
Enter Spring Boot. Spring Boot's primary purpose is to make Spring easier to work with. It achieves this in three ways:
All together, Spring Boot lets you focus on fulfilling your application's requirements without worrying about writing code that satisfies the needs of a framework.
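As a hint of what that focus looks like in practice, here is a minimal sketch of a Spring Boot web application; the class and endpoint names are illustrative:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// A complete, runnable web application in one class: auto-configuration supplies
// the embedded server and Spring MVC setup, leaving only the application logic.
@SpringBootApplication
@RestController
public class GreetingApplication {

    @GetMapping("/hello")
    public String hello() {
        return "Hello, Spring Boot!";
    }

    public static void main(String[] args) {
        SpringApplication.run(GreetingApplication.class, args);
    }
}
```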
In this session, you'll learn how to take your Spring Boot skills to the next level, applying the latest features of Spring Boot. Topics may include Spring Boot DevTools, configuration properties and profiles, customizing the Actuator, and crafting your own starters and auto-configuration.
TBD
In this session, you'll learn how to take advantage of the latest features of Spring Security and Spring Boot to secure an application, both at the web layer and at the method-level.
Information is probably the most valuable item we now have; crooks are looking for ways to steal our data and identities by sneaking into unsecured applications. As software developers, we must take steps to protect the information that resides in our applications. Whether it’s an email account protected with a username/password pair or a brokerage account protected with a trading PIN, security is a crucial aspect of most applications.
Spring Security is an extension to the Spring Framework that provides security to Spring applications both at the HTTP request level as well as at method invocation.
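For orientation, a web-layer configuration in the Spring Security 5 style might look roughly like the sketch below; the URL patterns and roles are hypothetical, and the session's own examples may differ. Method-level security is then typically layered on with annotations such as @PreAuthorize.

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

// Secures HTTP requests and enables @PreAuthorize checks on methods.
@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
                .antMatchers("/admin/**").hasRole("ADMIN")  // hypothetical URL pattern and role
                .anyRequest().authenticated()
            .and()
            .formLogin();
    }
}
```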
In this session, you'll learn how to use Spring Data to rapidly develop repositories for a variety of database types, including relational (JPA and JDBC), document (Mongo), graph (Neo4j), and others (Redis, Cassandra, CouchBase, etc).
For decades, relational databases and SQL have enjoyed their position as the leading choice for data persistence. Even though many alternative database types have emerged in recent years, the relational database is still a top choice for a general-purpose data store and will not likely be usurped from its position any time soon. When it comes to working with relational data, Java developers have several options.
But relational databases are not a one-size-fits-all solution. Thankfully, there are many options for data persistence, including relational, document, graph, key-value, and column-store databases, each presenting their unique way of handling data suitable for different problems.
Spring Data makes it easy to work with various databases by offering a programming model that is consistent, regardless of which type of database you're working with. And regardless of the database you're dealing with, you will find that Spring Data eliminates a lot of boilerplate code.
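The consistency shows up in the programming model: a repository is just an interface, and Spring Data supplies the implementation at runtime. A minimal sketch, assuming a Customer entity defined elsewhere:

```java
import java.util.List;

import org.springframework.data.repository.CrudRepository;

// Customer is assumed to be a mapped entity defined elsewhere.
// Spring Data generates the implementation; no hand-written DAO code required.
public interface CustomerRepository extends CrudRepository<Customer, Long> {

    // A query derived from the method name: find customers by last name.
    List<Customer> findByLastName(String lastName);
}
```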
Serverless is a big topic these days. More specifically, Function as a Service (FaaS) enables small-scale functions deployed in the cloud to be enabled only as needed. Compared to Platform as a Service (PaaS) services which are “always on” and require explicit scaling, FaaS services wake up when needed and are scaled by the platform automatically.
Imagine a service that only gets invoked once per hour. In PaaS, that service is always running and ready to handle requests, consuming processor resources and ultimately costing more to be ready at a moment's notice. In contrast, FaaS services awaken quickly upon invocation and go away when finished, conserving resources and cost.
In this example-driven session, we'll see how to create and deploy Function as a Service (FaaS) services with Spring and Spring Boot.
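With Spring Cloud Function, for example, the business logic can be expressed as a plain java.util.function.Function bean, and a platform adapter wires it to the FaaS trigger. A minimal, illustrative sketch:

```java
import java.util.function.Function;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

// The function below is the entire "service"; the surrounding platform decides
// when to wake it up and how to feed it input.
@SpringBootApplication
public class UppercaseFunctionApplication {

    @Bean
    public Function<String, String> uppercase() {
        return value -> value.toUpperCase();
    }

    public static void main(String[] args) {
        SpringApplication.run(UppercaseFunctionApplication.class, args);
    }
}
```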
In this session, we'll explore the new reactive features in Spring 5 to build reactive, non-blocking applications using Spring's familiar programming model.
Traditionally, applications have been built using a blocking, synchronous model. Although comfortable and intuitive for most programmers, this model doesn't scale well. And although there are several new approaches to reactive programming, they don't necessarily fit into the familiar programming model that Spring developers are accustomed to working with.
Spring 5 has introduced a set of new reactive features, enabling non-blocking, asynchronous code that scales well using minimal threads. Moreover, it builds on the same concepts and programming models that Spring developers have used for years.
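As a rough illustration of that familiar model applied reactively, a WebFlux-style controller might look like this (the endpoints are illustrative):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

// The familiar annotation-based controller, but returning reactive types so that
// requests are handled without tying up a thread per connection.
@RestController
public class GreetingController {

    @GetMapping("/greetings")
    public Flux<String> greetings() {
        return Flux.just("Hello", "Bonjour", "Hola");
    }

    @GetMapping("/greetings/{name}")
    public Mono<String> greeting(@PathVariable String name) {
        return Mono.just("Hello, " + name + "!");
    }
}
```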
As a software developer, you've likely come across at least one veteran in our field who has shared tales of the old days when they used punch cards to give instructions to a computer. Thankfully, those days are long gone. Over time, the way we interact with machines has evolved through various stages such as textual, graphical, and touch/gestural user interfaces…up to today where Siri, Alexa, and Google Home are ushering in a new era of voice user interfaces.
In this session, we'll focus on Alexa, the voice assistant present in Amazon's line of Echo devices. We'll explore the Alexa Skills Kit (ASK) and see how to construct voice UIs (known as “skills”) to do our bidding. We'll see how to back those skills using Spring, including Spring MVC and Spring Cloud Function. And maybe, one day, we'll tell future generations about how we once had to actually touch computers.
Machine Learning is a huge, deep field. Come get a head start on how you can learn about how machines learn.
This talk will be an overview of the Machine Learning field. We’ll cover the various tools and techniques that are available to you to solve complex, data-driven problems. We’ll walk through the algorithms and apply them to some real but accessible problems so you can see them at work.
What happens when web applications get really fast?
We are increasingly able to do more in the browser because of faster networks, optimized JavaScript engines, new standard APIs, and more. There is a new initiative to allow a binary format called WebAssembly that will provide a compiled, cross-platform representation to take us to the next level. Complex business applications and 3D video games alike will benefit from this new standard. Come hear about what it can do for you.
If you're not terrified, you're not paying attention.
Publishing information as webs of data does not require us to just give it away. We have a series of tools and techniques for managing identity, authentication, authorization and encryption so we only share content with those we trust.
Before we tackle Web Security, however, we need to figure out what we mean by Security. We will pull from the worlds of Security Engineering and Software Security to lay the foundation for technical approaches to protecting our web resources.
“Forewarned is forearmed”
-Old Proverb
If it seems like humans are easy to deceive, it's because we are. The myriad traits that make humankind so eminently exploitable are practically baked into our DNA. Too often these same traits make it into the software we build. This session takes an entertaining look at why humans are so easy to fool and goes on to explore what we can do to overcome our weaknesses and build more secure software.
Security is everyone's responsibility, but the burden disproportionately falls on us. As software engineers, we are the last line of defense in our organization. We build the technology, and that technology is constantly scanned, probed, and tested. Building truly secure software requires going beyond mere functional requirements; it requires a complete shift in how we think about problems.
Building Evolutionary Architectures requires identifying and creating architectural fitness functions. This hands-on workshop defines fitness functions and provides group exercises to help identify and discover them.
According to the Building Evolutionary Architectures book, an architectural fitness function provides an objective integrity assessment of some architectural characteristic(s). This hands-on workshop provides examples of fitness functions and group exercises to identify, define, and implement a variety of fitness functions: atomic, holistic, continuous, triggered, temporal, and others.
An evolutionary architecture supports incremental, guided change along multiple dimensions.
For many years, software architecture was described as the “parts that are hard to change later”. But then microservices showed that if architects build evolvability into the architecture, change becomes easier. This talk, based on my upcoming book, investigates the family of software architectures that support evolutionary change, along with how to build evolvable systems. Understanding how to evolve architecture requires understanding how architectural dimensions interact; I describe how to achieve appropriate coupling between components and services. Incremental change is critical for the mechanics of evolution; I cover how to build engineering and DevOps practices to support continuous change. Uncontrolled evolution leads to undesirable side effects; I cover how fitness functions build protective, testable scaffolding around critical parts to guide the architecture as it evolves.
The software development ecosystem exists in a state of dynamic equilibrium, where any new tool, framework, or technique leads to disruption and the establishment of a new equilibrium. Predictability is impossible when the foundation that architects plan against changes constantly in unexpected ways. Instead, prefer evolvability over predictability. This keynote illustrates how to achieve evolutionary architectures and how to retrofit existing systems to support better evolution.
Stories and lessons from architecture, design, process, and other sources, each illustrating important principles and pitfalls for modern architects.
Those who cannot remember the past are condemned to repeat it. –George Santayana
The past is never dead. It's not even past. –William Faulkner
Most developers pursue the Latest and Greatest with intense fervor, yet the history of engineering, including software projects, contains rich lessons that we risk repeating ad nauseam. This session recounts a variety of stories of projects that failed architecturally…and why. Ranging from the Vasa in 1628 to Knight Capital in 2012, each story tells of a mistaken interpretation of some fundamental architectural principle and the consequences–some good, some less so. I also look at the common threads in these stories, which resonate with problems many companies have but don't realize.
How do you create creativity? This talk offers techniques and perspectives to discover, grow, and project your ideas.
Where do ideas for new talks, books, videos, software…ultimately, everything…come from? A common question at conference panels to the speakers is “Where do you get your ideas for talks?” This session answers that question, along with how some of us cultivate new ideas. This talk investigates three aspects of ideation:
— How do new ideas arise?
I cover various ways of synthesizing new ideas: switching axiom(s), mutation, oblique strategies, and a host of other techniques to generate the germ of a new idea.
— How do ideas grow into intellectual property?
I cover techniques for iterating on ideas to discover deeper meanings and connections, as well as techniques to evolve and grow ideas.
— How do you communicate new IP?
I cover various ways to convey IP: presentations, articles, books, videos, and a host of other media. I talk about how writing and presenting techniques can amplify your new idea and get it out into the world.
One key to building new IP is separating ideation, organization, and realization, which often become jumbled. By separating them, we can build practices to allow each to flourish. This talk provides concrete advice in each area to help realize new ideas.
Creativity is the reason we have all the Cool Stuff we have. This session investigates how to fan the spark of an idea into a roaring flame of intellectual erudition.
While the Web itself has strong decentralized aspects to how it is used, the backend technologies are largely centralized. The naming systems, the routing systems, and the traffic that all points back to the same place for a website are all centralized technologies. This creates both a liability and a control point.
In order to break free of some of these limitations, new technologies are emerging to provide a more decentralized approach to the Web.
This talk will walk you through some emerging technology to provide decentralized content storage and distribution, edge computing, and more. We will touch upon the InterPlanetary File System (IPFS), WebTorrent, Blockchain spin-offs, and more.
Documents contain a lot of information. We'll introduce you to a variety of techniques to extract it.
Machine Learning techniques are useful for analyzing numeric data, but they can also be useful for classifying text, extracting content, and more. We will discuss how a variety of open source tools for extracting content, identifying elements and structure, and analyzing text can be used in distributed, microservice-friendly ways.
This workshop prepares web and application developers to build applications using Containers, Kubernetes, and OpenShift. We’ll start with a short introduction to containers and Kubernetes, which are the foundation of OpenShift.
Using hands-on exercises, we will walk you through a tasty menu of applications and use cases for OpenShift. How about showing you how easy it can be to deploy your pre-built containers? Maybe showing how health checks let OpenShift heal your application? Let's dig in to building containers from just a git repository. Want to see easy application scaling - no problem! Wish you could do A/B deployment - your wish is our command. Finally, we will show you a complete microservice application with a DB and polyglot backend services. You bring your curiosity and willingness to code, and we will teach you all you need to go home and start building pure awesomeness on OpenShift.
You have some modular code with a REST API. You are on your way to Microservices. Next, you package it in a container image that others can run. Simple. Now what? Your service needs to log information, needs to scale and load balance between its clones. Your service needs environment and metadata way outside its context. What about where the service will run? Who starts it? What monitors its health? What about antifragility? Updates? Networking? Oh my.
Don't get flustered. We will explore how Kubernetes simplifies the complexity of distributed computing.
This session will help you understand the terms, architecture and the mechanics of the Kubernetes tools. You will understand how to target your applications to a seemingly complex distributed compute platform.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts (1 of 2)
Aha moments with apps in containers can be quite liberating. The mobile space is saturated with “there's an app for that”. For us, we now expect “there's a container for that”. “Write once, run anywhere” (WORA) has changed to “Package once, run anywhere” (PORA). The growing community of containers is riding up the hype curve. We will look at many ways to assemble pods using architecture patterns you already know.
Your software package delivery and installation is no longer an rpm, deb, dmg, jar, war, native executable, or a run script; it is simply an image that has a common container run command.
During the presentation, we will explore some examples on Katacoda.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts
At the 2009 Agile conference, J.B. Rainsberger declared “Integration tests are a scam”. I agree. Come see some compelling reasons why consumer-driven contract testing is a much better approach, particularly for microservices.
We will explore different testing techniques on Kubernetes, including an important one called “Consumer-Driven Contracts”.
After a brief overview of the concepts, a live demonstration will show you how to:
This is the droid you are looking for. Within this droid are hundreds of rules designed to review your code for defects, hotspots and security weaknesses. Consider the resulting analysis as humble feedback from a personal advisor. The rules come from your community of peers, all designed to save your butt.
We will explore techniques on how to add these checks to your IDE, your build scripts and your build pipelines.
Too much chatter in your pull requests? See how the analysis tools teach best practices, without ego or criticism, to a spectrum of developers. As a leader, see how to develop an effective code quality intern program around this technique. We will also see some techniques to use Kubernetes to obtain reports and dashboards right on your local machine and from your continuous integration pipeline.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts.
From operating systems on bare metal, to virtual machines on hypervisors, to container orchestration platforms: how we run our code and bundle our applications continues to evolve. Serverless computing continues this evolutionary path for our architectures.
Kubernetes provides an ideal, vendor-agnostic platform for serverless computing. Serverless promises developers that they can worry less about the cluster and focus more on their logic. Building on your Kubernetes knowledge, we will discover the various contributors of serverless frameworks on Kubernetes. Specifically, we will unpack how two open source serverless frameworks, Kubeless and OpenFaaS, leverage Kubernetes to achieve their promises. We will explore how Knative is helping serverless providers evolve to the next level of sophistication.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts.
Kubernetes is a powerful platform for running containers and distributing computation workloads across resources. A significant question is how you get all your code to this platform, continuously.
In 2019 our community is bursting with new solutions to assist our delivery pipelines. While Jenkins is a dominant player, there is a growing array of new ideas and choices. From coding at your laptop to building containers to deployments, we will explore the various tools and techniques to reduce the delivery frictions.
Kubernetes is also a fitting platform for hosting your continuous tools, pipeline engines, registries, testing, code analysis, security scans, and delivery workflows.
From this session, you will understand the latest tools and techniques for pipelining on Kubernetes. Let's up the game on your Maturity Model.
The fundamental testing libraries in Java have undergone complete redesigns in the past few years. JUnit 5, known as JUnit Jupiter, redesigns the most well-known tool in all of testing. This talk will demonstrate the new features, how they are intended to be used, and discuss experimental ideas in the pipeline.
JUnit has been remarkably stable over the years and is one of the most widely adopted frameworks in the Java world. The latest version, JUnit 5, takes JUnit to the next level. Full of new features like conditional test execution, parametric testing, labeling and filtering tests, and more, it brings all the modern thinking on testing into the JUnit world. It also takes advantage of the functional features added to Java since version 8 to create a powerful, new library for testing your code.
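A small, illustrative sketch of the Jupiter style, showing a plain test, a parameterized test, and conditional execution (the examples exercise plain JDK strings just to keep the sketch self-contained):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.condition.EnabledOnOs;
import org.junit.jupiter.api.condition.OS;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class StringBehaviorTest {

    @Test
    @DisplayName("trim removes surrounding whitespace")
    void trimsWhitespace() {
        assertEquals("junit", "  junit  ".trim());
    }

    // Parameterized test: runs once per value.
    @ParameterizedTest
    @ValueSource(strings = {"madam", "racecar"})
    void detectsPalindromes(String candidate) {
        assertEquals(candidate, new StringBuilder(candidate).reverse().toString());
    }

    // Conditional execution: this test only runs on Linux.
    @Test
    @EnabledOnOs(OS.LINUX)
    void runsOnlyOnLinux() {
        assertEquals("/", System.getProperty("file.separator"));
    }
}
```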
Vue is a new, progressive front-end framework. At first glance, Vue might look like Yet Another JavaScript Framework, but it's time to take a closer look.
It is designed from the ground up to be incrementally adoptable, and can easily scale between a library and a framework depending on different use cases. It consists of an approachable core library that focuses on the view layer only, and an ecosystem of supporting libraries that helps you tackle complexity in large Single-Page Applications.
In this session we'll start with a look at how Vue stacks up against the competition. We will explore Vue from incremental adoption to building a full SPA. We'll cover the core concepts and capabilities and take a look at the growing ecosystem around it.
You understand the basics: the core Vue library, templates, styling, and components. You understand the how and why of Vue. What's next?
In this session we start with some core best practices for the Vue ecosystem. We dive deeper into some capabilities of the core Vue library, as well as branching out into some of the official add-ons; specifically:
Vuetify, the Vue CLI, vue-router, and nuxt.js.
By the end of this conference you will have learned many new tools and technologies. The easy part is done; now for the hard part: getting the rest of the team (and management) on board with the new ideas. Easier said than done.
Whether you want to effect culture change in your organization, lead the transition toward a new technology, or are simply asking for better tools; you must first understand that having a “good idea” is just the beginning. How can you dramatically increase your odds of success?
You will learn 12 concrete strategies to build consensus within your team, as well as 6 techniques to dramatically increase the odds that the other person will say “Yes” to your requests.
As a professional mentalist, Michael has been a student of psychology, human behavior, and the principles of influence for nearly two decades. There are universal principles of influence that are necessary to both understand and leverage if you want to be a more effective leader of change in your organization.
In this session we discuss strategies for getting your team on board, as well as when and how to approach management within the department and higher up in the organization.
In Part 1, you learned the core principles of influence and persuasion. How do we take this back to the office and apply what we've learned?
We dive deep into specific strategies to get both the team and the business on board with your ideas and solutions. We cover several real-world patterns you can follow to be more effective and more persuasive. Part 1 was conceptual; Part 2 is practical.
Micronaut is a modern, JVM-based, full-stack framework for building modular, easily testable microservice applications.
In this session we'll dive deep into Micronaut, its strengths, capabilities, and best practices when building and testing services, functions, and reactive apps.
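To give a sense of the programming model, a minimal Micronaut HTTP endpoint might look like this (the controller and route are illustrative):

```java
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;
import io.micronaut.runtime.Micronaut;

// Micronaut wires this controller at compile time via annotation processing,
// which keeps startup fast and memory use low.
@Controller("/hello")
class HelloController {

    @Get("/{name}")  // illustrative route; binds the path variable to the parameter
    String hello(String name) {
        return "Hello, " + name + "!";
    }
}

public class Application {
    public static void main(String[] args) {
        Micronaut.run(Application.class, args);
    }
}
```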
Good discussions are supposed to diverge from their intended path. Free association is a feature, not a bug, and helps you see new connections between ideas. Without structure, however, it can be difficult to add context to new ideas and understand how they relate to more immediate problems. This talk discusses the technique of mental bookmarks – how to remember where you were when a discussion diverged. In addition to giving you a reputation for having an amazing memory, the skill also helps with personal awareness in general.
To give the technique context, we'll look at the fractal nature of success – the way we tend to see our current environment in relative terms, always comparing ourselves to those slightly more successful and slightly less successful.
It happens to us all; there are simply days where it seems impossible to get anything done. This session focuses on techniques and tips to get into the zone, stay in the zone, and protect your productivity, even in disruptive environments.
Rather than focusing on any one productivity methodology (e.g., GTD), this talk analyzes the internal and external factors that affect our productivity and offers broader strategies to get back on track.
This open source machine learning framework from Google has taken off. Come learn what you can do with it in your own organization.
TensorFlow is a powerful, data-flow-oriented machine learning framework developed by Google's Brain Team. It was designed to be easy to use and widely applicable to both numeric, neural-network-oriented problems and other domains. We'll cover an overview as well as apply it to several fun, realistic problems.