A down-in-the-trenches look at building, running, and day-to-day development with a Continuous Delivery pipeline. This talk is based on my experiences building multiple CD pipelines and optimizing developer workflows to push changes to production all day. I'll walk you through how we transformed a two-day deployment process into a 20-minute CD pipeline and then went on to perform more than 20,000 deployments in one year.
During this presentation we'll walk through the evolution of a team teetering on collapse. Production deployments are a long-running ceremony that hasn't really changed in years. Deployments are risky and everyone involved with the project acts accordingly: deployments can take days, and the company website has scheduled maintenance windows.
Over several months, the team will transform into a model of agile process mastery. Deployments will take minutes instead of days. The team's structure and concerns over deploying to production will also change shape.
During this talk we'll dig into the anatomy of a continuous delivery pipeline, what it is, how it works, and the challenges you'll face making the transition. Where do you start and what are the big four considerations of continuous delivery? Do you need company buy-in or can you start small and grow out to the rest of the organization?
We'll walk through the entire process, talk about team organization, breaking up the monolith, your first steps towards CD, identifying your primary objective, the building blocks of a Microservices architecture, the psychology of continuous delivery, how to write effective code in a CD ecosystem, and we'll build a continuous delivery pipeline and Microservice during the presentation.
How do you build a Cloud Native Application? So many cloud deployments are lift-and-shift architectures; what would it look like if you started from scratch and used only cloud native technologies? During this session we will compare and contrast two applications: one built using a traditional Java application architecture, the other using a cloud native approach. How does building an app for the cloud change your architecture, application design, and development and testing processes? We’ll look at all this and more.
During this session we’ll dive into the details of Cloud Native applications, their origins and history. Then we’ll look at what’s involved when you move from an on-prem data center to the cloud. Should you change your approach to application design now that you are in the cloud? If so, what does a cloud-based design look like?
During this session we’ll use a few example applications to see how their architecture and design choices impact their ability to maximize the power of the cloud.
By the end of the session, you’ll have a better understanding of the benefits of a cloud native application design, how to best leverage cloud capabilities, and how to create performant Microservices.
I hope you'll join me on this exciting survey of Serverless Computing. When you think of Serverless you probably think of Lambdas or Cloud Functions, but there's so much more to the Serverless ecosystem. During this session we'll look at Serverless Computing in all its various forms and discuss why you might want to use a Serverless architecture and how it compares to other cloud environments. We'll look at common Serverless misconceptions and discuss when you should and shouldn't use Serverless.
Serverless is an exciting component of Cloud computing and it's growing rapidly. During this session we'll look at all things Serverless and discuss how to incorporate it into your system architecture. We'll build a Lambda function during the presentation and compare the process to a typical Java application. Next, we will talk about the pros and cons of Serverless and when you should use it.
There are a few Serverless frameworks available today that make building a function easier than ever. We'll look at a couple of these frameworks, build a local Serverless function, and deploy it to AWS (if the network cooperates). Finally, we'll talk about performance considerations, how to structure your Serverless functions, and more.
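To make the discussion concrete, here is a minimal sketch of the shape a Serverless function takes: a single stateless entry point that receives an event and returns a response. The class and method names are invented for illustration; a real AWS Lambda handler would implement `RequestHandler` from the `aws-lambda-java-core` library.

```java
import java.util.Map;

// Sketch of a Lambda-style function: one stateless entry point that takes
// an event payload and returns a response. Names here are illustrative.
public class GreetingFunction {

    // In a real AWS Lambda this method would live behind the
    // RequestHandler<Map<String, String>, String> interface.
    public static String handleRequest(Map<String, String> event) {
        String name = event.getOrDefault("name", "world");
        return "Hello, " + name + "!";
    }

    public static void main(String[] args) {
        // Local invocation, standing in for the Lambda runtime.
        System.out.println(handleRequest(Map.of("name", "Serverless")));
    }
}
```

Keeping handlers small and stateless like this is what lets the platform scale them to zero and back without your involvement.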
A lot of software development seems to be concerned with finding answers; troubleshooting issues; solving problems. But what if we’re not asking the right questions? Learning to ask the right questions in the various contexts in which we work is one of the key, yet underemphasized skills that any competent engineer should have!
In this session, we’ll examine several critical questions that you should keep in your toolbox. You can use these questions to evaluate tool and technology selection decisions, architecture strategy, data modeling decisions, and more. You should walk away with this session feeling equipped to engage in “adult conversations” about software engineering, where we cut through the hype and focus on the tradeoff decisions before us.
So what are the questions? Well, you’ll just have to come to the session to find out. But if you browse through my Twitter stream, you just might find a few.
NOTE: THIS IS A DISCUSSION ORIENTED SESSION. THERE WILL BE MINIMAL LECTURE AND SLIDES.
According to Wikipedia, metaphysics is the branch of philosophy that studies the essence of a thing. This definition invites the question, “Does software design have an essence?” And if it does, would the discovery and understanding of this essence lead to a fundamental improvement in our ability to build well-designed software? Does design matter? What exactly is the design of software? Can we point to it? Or is it something immaterial?
These are the types of questions that I want to confront in this session. We’ll draw inspiration from various individuals, from Don Norman to Fred Brooks to Jack Reeves to Christopher Alexander. We’ll consider their explorations of design, both in and out of the software realm. But ultimately, we’re going to have a conversation inspired by the Dialogues of Plato, and each iteration of this session will arrive at its own conclusions.
All software architectures have to deal with stress. It’s simply the way the world works! Stressors come from multiple directions, including changes in the marketplace, business models, and customer demand, as well as infrastructure failures, improper or unexpected inputs, and bugs. As software architects, one of our jobs is to create solutions that meet both business and quality requirements while appropriately handling stress.
We typically approach stressors by trying to create solutions that are robust. Robust systems can continue functioning properly in the presence of internal and external challenges, but they also have one or more breaking points. When we pass a robust system's known threshold for a particular type of stress, it will fail. When a system encounters an “unknown unknown” challenge, it will usually not be robust!
Recent years have seen new approaches, including resilient, antifragile, and evolutionary architectures. All of these approaches emphasize the notion of adapting to changing conditions in order to not only survive stress but sometimes to benefit from it. In this class, we’ll examine together the theory and practice behind these architectural approaches.
As an architectural style, microservices are here to stay. They have crossed the proverbial chasm, and now it’s time to get to work. Microservices provide us with the ability to create truly evolutionary architectures composed of cohesive and autonomous components using well known and characterized distributed systems patterns.
As we create and compose components across the hard boundary of the network, we become deeply interested in establishing the correct boundaries, which has resulted in renewed interest in system design and decomposition. Fortunately, the tried and true practices of Domain-Driven Design are available to us.
In this presentation, we will cover a distillation of strategic (bounded contexts, subdomains, context mapping) and tactical (aggregates, domain events) DDD techniques and demonstrate how they enable us to create effective event-driven microservices.
Connascence is a term coined by Meilir Page-Jones in his article “Comparing Techniques by Means of Encapsulation and Connascence.” It is a software quality metric that attempts to measure coupling between entities in three dimensions: strength, locality, and degree. Rather than describing architectural coupling as simply loose or tight, connascence gives us a tool to describe and measure the huge diversity in coupling that we encounter in systems. Relatedly, it also gives us a tool for encouraging high cohesion within modules.
Connascence can also be divided into static and dynamic categories, with a long list of specific types such as connascence of name, meaning, timing and identity.
Page-Jones gives us a rule to follow, “Eliminate any unnecessary connascence and then minimize connascence across encapsulation boundaries by maximizing connascence within encapsulation boundaries,” which he says can be applied to “any future design paradigm with partitioning, encapsulation and visibility rules.”
In this session, we’ll examine connascence as it appears within architectural patterns such as REST, message-driven collaboration, and microservices, as well as within various serialization techniques. You’ll walk away with tools to help you better describe and manage coupling within your system architectures.
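As a tiny, invented illustration of the strength dimension: connascence of position means every caller must memorize argument order, while refactoring toward named accessors weakens the coupling to connascence of name. The classes below are made up for this example.

```java
// Connascence of position: every caller must agree on the argument order;
// swap the first two arguments and the compiler stays silent.
class PositionalMailer {
    static String send(String from, String to, String subject) {
        return from + " -> " + to + ": " + subject;
    }
}

// Connascence of name: callers agree only on field names, a weaker and
// more refactoring-friendly form of coupling.
class Message {
    String from;
    String to;
    String subject;

    Message from(String v) { from = v; return this; }
    Message to(String v) { to = v; return this; }
    Message subject(String v) { subject = v; return this; }

    String send() { return from + " -> " + to + ": " + subject; }
}

public class ConnascenceDemo {
    public static void main(String[] args) {
        System.out.println(PositionalMailer.send("a@x.io", "b@x.io", "hi"));
        // With names, the order of the calls no longer matters:
        System.out.println(new Message().to("b@x.io").from("a@x.io").subject("hi").send());
    }
}
```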
A real-world look at using Consumer Driven Contracts in practice. How to eliminate a test environment and how to build your services with CDC as a key component.
One of the biggest challenges in building out a Microservices architecture is integration testing. If you use a Continuous Delivery pipeline, none of your environments, staging or production, are ever in a steady state. How do you perform adequate testing when your environment can change during your test? How do you manage a complex web of interdependent Microservices? How do you safely evolve your API in this environment?
Consumer Driven Contracts are a key component for a successful Microservices strategy. We'll look at different CDC frameworks and how to use them. We'll discuss developer workflows and how to ensure your API changes don't break client implementations. Finally, we'll build a couple of Microservices and walk through the lifecycle of Consumer Driven Contract tests.
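To sketch the core idea without committing to a particular framework (real projects would use something like Pact), here is a hand-rolled contract check in Java; all names and the request format are invented for illustration. The consumer records its expectations as a contract, and the provider replays them against its real implementation before shipping.

```java
import java.util.Map;

// A hand-rolled sketch of a Consumer Driven Contract. Everything below
// is invented for illustration; a real project would use a CDC framework.
public class ContractDemo {

    // The contract the consumer publishes: for this request,
    // expect this response shape.
    record Contract(String request, Map<String, String> expectedResponse) {}

    // A stand-in for the provider's actual endpoint.
    static Map<String, String> provider(String request) {
        if (request.equals("GET /users/42")) {
            return Map.of("id", "42", "name", "Ada");
        }
        return Map.of("error", "not found");
    }

    // Provider-side verification: replay each consumer expectation against
    // the real implementation before a release is allowed to ship.
    static boolean verify(Contract contract) {
        return provider(contract.request()).equals(contract.expectedResponse());
    }

    public static void main(String[] args) {
        Contract c = new Contract("GET /users/42", Map.of("id", "42", "name", "Ada"));
        System.out.println("contract holds: " + verify(c));
    }
}
```

The payoff is that the provider can evolve its API freely, as long as every published consumer contract still verifies, with no shared test environment required.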
Docker has revolutionized how we build and deploy applications. While Docker has revolutionized production, it's also had a huge impact on developer productivity. Anyone who's used Docker for an extended period of time will tell you it's a blessing and a curse. Yes, it's portable, but networking and other characteristics of Docker can make the most chill developer long for plain old Java. During this session we'll look at Docker's good points and its painful ones. The end goal: enable anyone on your team to go from zero to production in under 20 minutes.
This session will work through the evolution of a Java CRUD application, evolving it from a traditional on-prem application to a streamlined, Continuous Delivery ready, Cloud Native application. If you work in a polyglot environment, you know switching to a new service can be a difficult process. There are new tools to install, environments to set up, databases to use, and so on. Docker can streamline this process and enable you to switch between systems quickly and easily.
By the end of this session, you'll have a pattern for creating team-friendly Microservices that work well in a Continuous Delivery Pipeline and can be deployed to any container environment. We'll use Docker to super-charge your development environment and enable you to move faster and safer than ever before.
Would Chuck Norris ask you to come hear him speak at a conference? No, he wouldn't. He would TELL you that you're coming, and then roundhouse kick you in the face if you gave him any more lip.
“What would Chuck Norris do?” is a philosophy this session will cover in depth. Other topics include: badass vs a-hole, human duck typing, the art of [not] caring, instrumentality, and what your facial hair says about you. You won't learn any new code in this session, but you might unleash a Pandora's box of awesomeness that will change the way you interact with your coworkers forever.
Many new features have been added between the last Long Term Support release in Java 8 and the current one in Java 11. This talk will summarize many of those capabilities, from the Jigsaw implementation of JPMS to unmodifiable collections to local variable type inference and more. In addition to the basic code changes, the new six-month release schedule and associated licensing issues will be reviewed.
If, as anticipated, Java 12 is released in March and Java 13 in September, new features from those versions will also be included, even though they will break the joke in the title of this talk.
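A few of the post-Java-8 changes mentioned above fit in a single sketch: unmodifiable collection factories (Java 9), local variable type inference with `var` (Java 10), and new String convenience methods (Java 11). The values are illustrative.

```java
import java.util.List;
import java.util.Map;

public class PostJava8Features {
    public static void main(String[] args) {
        // Java 9: unmodifiable collection factory methods.
        List<String> releases = List.of("9", "10", "11");

        // Java 10: local variable type inference; the compiler infers
        // Map<String, Integer> from the initializer.
        var featureCounts = Map.of("var", 1, "List.of", 2);

        // Java 11: String convenience methods such as repeat and strip.
        System.out.println("Java ".repeat(2).strip());
        System.out.println(releases + " " + featureCounts.get("var"));
    }
}
```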
JUnit 5 is a complete refactoring of the most well-known tool in all of testing, and the developers have done a remarkable job. The new JUnit 5 version is full of new features, updated semantics, and usability improvements. This talk will demonstrate all the new features, as well as some of the experimental ones in the pipeline.
JUnit has been remarkably stable over the years and is one of the most widely adopted frameworks in the Java world. The latest version, JUnit 5, takes JUnit to the next level. Full of new features like conditional test execution, parametric testing, labeling and filtering tests, and more, it brings all the modern thinking on testing into the JUnit world. It also takes advantage of the functional features added to Java since version 8 to create a powerful, new library for testing your code. This talk will show you how to adopt JUnit 5's new features while maintaining backward compatibility.
Gradle is the build tool of choice in the open source world, and rapidly becoming the standard in industry as well. Anyone who works with Gradle on a Java project knows the basics of the Java plugin and how to write simple tasks in Groovy. Gradle can do much more, however. This talk will demonstrate how to write your own custom task classes and how to create Gradle plugins from them. Other Gradle features will be demonstrated as well, including file manipulation, incremental builds, generating the Gradle wrapper, and resolving conflicts in dependencies.
Gradle Inc also provides a free build scan capability to analyze build files. This too will be demonstrated, as well as profiling your build, determining dependencies, and more.
Microservices have helped us break apart back end services, but large front ends often remain problematic monoliths.
In this session you’ll learn how to apply the same concepts to large front-end applications, slicing them into end-to-end verticals. These verticals can then be owned by different teams and even written in different frameworks. Can Angular, React, and Vue all live together in harmony? How about AngularJS and Angular2+? With micro frontends, the answer is yes!
Git. It can be intimidating if you're accustomed to other kinds of source control management. Even if you're already using it and comfortable with the basics, situations can arise where you wish you understood it better. Developers often just want to write code and tell everyone else to take a hike, but the reality is that most of us work on teams where the feature-based code we write must be integrated, tested, and ultimately released.
This session will cover the most critical git concepts, basic and advanced, in a completely visualized way. At the same time, you’ll pick up git terminal commands to help you understand (or even eliminate) a git GUI you already use. Go beyond the basics to learn how to get yourself out of a git pickle, practical release management strategies, and more.
Learn the basic syntax and semantics for the Kotlin programming language. Kotlin is an alternative JVM language that provides null safety, static typing, and powerful IDE support. This talk will emphasize the relationships between Kotlin and Java, highlighting the differences in types, functional programming, collections, and more.
Demonstrations will include null safety, static typing, functional programming, collections, and much more.
Good discussions are supposed to diverge from their intended path. Free association is a feature, not a bug, and helps you see new connections between ideas. Without structure, however, it can be difficult to add context to new ideas and understand how they relate to more immediate problems. This talk discusses the technique of mental bookmarks – how to remember where you were when a discussion diverged. In addition to giving you a reputation for having an amazing memory, the skill also helps with personal awareness in general.
To give the technique context, we'll look at the fractal nature of success – the way we tend to see our current environment in relative terms, always comparing ourselves to those slightly more successful and slightly less successful.
Machine Learning is a huge, deep field. Come get a head start on learning how machines learn.
This talk will be an overview of the Machine Learning field. We’ll cover the various tools and techniques that are available to you to solve complex, data-driven problems. We’ll walk through the algorithms and apply them to some real but accessible problems so you can see them at work.
Documents contain a lot of information. We'll introduce you to a variety of techniques to extract it.
Machine Learning techniques are useful for analyzing numeric data, but they can also be useful for classifying text, extracting content, and more. We will discuss a variety of open source tools for extracting content, identifying elements and structure, and analyzing text, and how they can be used in distributed, microservice-friendly ways.
Deep Learning is an evolution of the capabilities of more conventional machine learning to take advantage of the extra data available from Big Data systems. With more data, many of the manual aspects of feature selection and other machine learning steps can be derived automatically. We will highlight many of the main deep learning frameworks and give you a hands on introduction to what is possible and how you can start to use them.
We will cover the leading frameworks and walk through hands-on examples.
What comes after machine learning and deep learning? How about dynamic systems that need new ways of finding paths through complex scenarios such as video games, challenging board games and more.
In addition to covering the main ideas of deep reinforcement learning, we will cover some of the main tools and frameworks.
While the Web itself has strong decentralized aspects to how it is used, the backend technologies are largely centralized. The naming systems, the routing systems and the traffic that all points back to the same place for a website are all centralized technologies. This creates both a liability as well as a control point.
In order to break free of some of these limitations, new technologies are emerging to provide a more decentralized approach to the Web.
This talk will walk you through some emerging technologies that provide decentralized content storage and distribution, edge computing, and more. We will touch upon the Interplanetary Filesystem, WebTorrent, Blockchain spin-offs, and more.
What would happen if web applications got really fast?
The #1 fallacy of distributed computing is “The Network is Reliable.” Yet we still build web apps that 100% rely on a server and a network connection. What if we could build web apps that work, regardless of connection state? The promise and capabilities of Progressive Web Apps (PWAs) make this possible, and these capabilities are available today.
This session doesn't focus on a specific framework or backend technology; while those demonstrations are impressive, they are only useful for a handful of use cases. Instead we look at architecture patterns and techniques that can work with any framework, any backend, and virtually any app.
In 2005 the way we built web applications changed when Google released Google Maps and the AJAX map canvas. This approach fundamentally changed how users expect to interact with web applications. Suddenly any app that sent postback after postback felt cumbersome and positively ancient.
Progressive web apps and offline capabilities are that next big shift. Soon any app that doesn't work offline is going to be as jarring and frustrating of a user experience as it would be to go back to a pre-ajax world. Don't let this be your app!
In tech teams it's a constant firefight. We react. Then we react to the reaction… the cycle continues. In all this noise, in all this chaos, how do we move forward? How do we remain proactive?
A great leader must be an enabler for the team. At times this means insulating the team from the noise. At other times it means improving the environment for the team. At all times, however, it requires setting clear priorities and conditions for success.
This session is focused on the art of moving forward in even the noisiest environments.
On the NFJS tour, there are questions that seem to come up again and again. One common example is “How do we determine which new tools and technologies we should focus our energy on learning?” another is “How do we stop management from forcing us to cut corners on every release so we can create better and more maintainable code?” which, after a while, becomes “How can we best convince management we need to rewrite the business application?”
There is a single meta-answer to all these questions and many others.
It begins with the understanding that what we as engineers value and what the business values are often very different (even if the ultimate goals are the same). By being able to understand these different perspectives, it's possible to begin to frame our arguments around the needs and wants of the business. This alone will make any engineer significantly more effective.
This session picks up where “Stop writing code and start solving problems” leaves off, discussing what value is and how we align the values of the business with the needs and values of the engineer.
So you have some code and it is in a bounded context with a REST API. You are on your way to Microservices. Next you wrap it in a container and now it is an image that others can run. Simple. Now what? No service is an island. Your service needs to log information, needs to scale and load balance between its clones. Your service needs environment and metadata way outside its context. What about where the service will run? Who starts it? What monitors its health? What about antifragility? Updates? Networking? Oh my.
Services live in clusters and clusters live in data centers. Many concepts overlap with the features of cloud management. But don't get too flustered since, fundamentally, services are managed by clusters. There are several approaches to cluster management such as Docker Swarm, Mesos with Marathon and Kubernetes.
Minikube with Kubernetes is an approachable way to set up a local cluster that is easy to understand and get started with. Whether you have a simple service or a Web application with a set of services, you can develop much of it on Kubernetes with Minikube. We will run some practical examples. Once you understand the mechanics of the tools, we will explore how it works, sort through the terminology, and share ideas about practical uses for this technology.
Afterward, you will understand how to run your personal cluster with your Linux, OS X or Windows laptop to further enjoy unraveling the mysteries of running applications in a cluster.
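For a taste of what those practical examples look like, here is a minimal Deployment manifest of the kind you might apply to a Minikube cluster; the image and names are placeholders.

```yaml
# Minimal Deployment: three replicas of a containerized service.
# Apply with: kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-service
  template:
    metadata:
      labels:
        app: hello-service
    spec:
      containers:
        - name: hello-service
          image: example/hello-service:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

The cluster's job from here on is reconciliation: if a Pod dies, Kubernetes notices the actual state no longer matches the declared three replicas and starts a replacement.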
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Understanding Kubernetes: Fundamentals
Aha moments with apps in containers can be quite liberating. The mobile space is saturated with “there's an app for that”. For us, we now expect “there's a container for that”. “Write once, run anywhere” (WORA) has changed to “Package once, run anywhere” (PORA). As the community of containers is riding up the hype curve we will look at some of those top aha moments together.
• Go rogue with Java 9 and jlink
• Polyglot microservices
• RabbitMQ broker in 2 minutes
• Private Docker hub in a container
• Composing a Pod with multiple containers
• Database flavors for integration testing
The epiphanies come from the modular simplicity. Leveraging namespaces and cgroups, these apps share a common kernel without polluting the host OS. This simplifies installation and removal and reduces conflicts. The barriers to getting something running are decreased and normalized to a container run command. This is subtly powerful and liberating. With this simplicity comes complexity, such as shared resources, file systems, mounts, networking, and overall cluster management.
Your software package delivery and installation is no longer an rpm, deb, dmg, jar, war, native executable, or run script; it is simply an image launched with a common container run command.
Lastly, we will explore how containers can help and hurt your team if you are not careful. What goes into a container is a reflection of a team's skills. Should a team make each tech stack different, or should you standardize? External processes affect the tech stack choices inside your container, such as static code analysis, code beautifiers, CI/CD, tracing, logging, and monitoring. Exercise caution, as standardization and frameworks can lead to coupling. Your tech stack details can change from version to version, so get your SemVer and API versioning right. Finally, containers can be a vehicle for introducing new technologies to those who are conservative and risk-averse.
During the presentation, we will explore some examples on Katacoda.
There are a variety of ways to declare to Kubernetes what applications should be run and managed. Typically YAML declarations or Helm charts are sufficient to define your applications. However, a strong technique has emerged as a way to control your applications: when you leverage CustomResourceDefinitions (CRDs) and controllers together, they make up the Operator pattern.
Learning to use Operators should be part of your architecture toolbox when hosting professional solutions using Kubernetes.
In this session, you will see how this pattern emerged. We will explore how the architecture works natively on Kubernetes. We will see which open source Operators are currently available for you to use. The session will conclude by exploring how to write your own Operators using the Operator SDK.
Learn what Operators are all about and how you can leverage them for your solutions.
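For orientation, the declarative half of the pattern looks like this: a CRD teaches the API server a new resource type, and a controller (the other half of the Operator) watches instances of that kind and reconciles cluster state to match the declared spec. The group and kind below are invented for illustration.

```yaml
# A CustomResourceDefinition teaching the API server a new resource type.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: cronbackups.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: CronBackup
    plural: cronbackups
    singular: cronbackup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string
```

Once this is applied, `kubectl apply` can create `CronBackup` objects just like built-in resources, and your controller carries the operational knowledge of what to do with them.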
Kubernetes out of the box is a strong platform for running and coordinating large collections of services, containers, and applications. As is, Kubernetes is powerful for many solutions.
Remember Underdog? He was a mild-mannered dog, but when stress and conflict were introduced to the plot, he took a magic pill and became a superhero. Istio is a superhero for Kubernetes.
Istio is an open, platform-independent service mesh that manages communications between services in a transparent way. With a distributed architecture many things can fail, slow down and become less secure. Istio provides solutions to those stresses in our plot toward this architecture style:
• Traffic management
• Policy enforcement
• Service identity and security
We will explore these benefits with Istio through some working examples on Kubernetes. The key point is this meshing solution allows your code to be less coupled to the inherent weaknesses of a distributed platform.
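As a taste of the traffic management piece, here is a sketch of an Istio VirtualService that splits traffic between two versions of a service without touching application code. The service name and subsets are illustrative; the v1/v2 subsets would be defined in an accompanying DestinationRule.

```yaml
# Weighted routing: send 90% of traffic to v1 and 10% to v2 of a service.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```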
Three evolutionary ecosystems work well together: Java, Containers, and Kubernetes.
A strong foundation of Java has always been write once, run anywhere, modularity, and polyglot. Containers and Kubernetes echo the exact same admirable qualities. Many of the patterns found in Java expand into Kubernetes as you massively scale your applications across the resources in your datacenter.
Containers entice us to pick up other languages and tools beyond the JVM. These new opportunities for additional languages and tools in containers need to be embraced. Does that mean you need to abandon Java? No, and with versions 9, 10, 11, 12… it continues to evolve, by design. The JVM continues as a first-class solution for your cloud-native adventures.
Granted, Java <= 8 had a bumpy start with containers, but we have moved on from that.
In this presentation, we'll see how recent versions of Java run very effectively in containers, ideal for your Kubernetes-based solutions.
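A sketch of what that can look like in practice, assuming a Temurin base image and a placeholder jar path: recent JVMs detect container CPU and memory limits, so the heap can be sized relative to the container rather than the host.

```dockerfile
# A containerized Java service (image tag and jar name are placeholders).
FROM eclipse-temurin:17-jre

# Recent JVMs are container-aware; MaxRAMPercentage sizes the heap
# relative to the container's memory limit rather than the host's RAM.
ENV JAVA_OPTS="-XX:MaxRAMPercentage=75.0"

COPY build/libs/app.jar /app/app.jar
EXPOSE 8080
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar /app/app.jar"]
```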
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Understanding Kubernetes: Fundamentals.
Highly cohesive and loosely coupled business functions can have a great impact on your agility to deliver new features. Microservices in containers is an effective implementation detail for continuous delivery. However, before you bite into that big sandwich, consider how provisioning a variety of data flavors as containerized endpoints could greatly improve your internal testing.
How many times have you heard a colleague say, “Well that feature does not have integration tests because it requires a database with some specialized data”? Balderdash - put your data flavors in containers!
Let's explore a solution to create a pipeline of data flavors. We use Docker images, Kubernetes Pods, and Minikube to provision these endpoints. See how a Gradle project drives integration tests against these Pod endpoints, all ready for your continuous integration pipeline. In the end you'll see the power of Consumer Driven Contracts against your dataset flavors.
Watch how your team becomes empowered to create their own dataset flavors in containers for development and testing. See a wall to integration testing come down.
We will explore a Gradle-based project on GitHub.
This presentation will follow the source code found here.
These days, you can’t swing a dry erase marker without hitting someone talking about microservices. Developers are studying Eric Evans's prescient book Domain-Driven Design. Teams are refactoring monolithic apps, looking for bounded contexts and defining a ubiquitous language. And while there have been countless articles, videos, and talks to help you convert to microservices, few have spent any appreciable time asking if a given application should be a microservice. In this talk, I will show you a set of factors you can apply to help you decide if something deserves to be a microservice or not. We’ll also look at what we need to do to maintain a healthy micro(services)biome.
There are many good reasons to use a microservices architecture. But there are no free lunches. The positives of microservices come with added complexity. Teams should happily take on that complexity…provided the application in question benefits from the upside of microservices. This talk will cut through the hype to help you make the right choice for your unique situation.
Every organization has at least a phalanx or two in the “Cloud” and it is, understandably, changing the way we architect our systems. But your application portfolio is full of “heritage” systems that hail from the time before everything was as-a-service. Not all of those applications will make it to the valley beyond. How do you grapple with your legacy portfolio? This talk will explore the strategies, tools, and techniques you can apply as you evolve towards a cloud native future.
In this talk, you will learn how to evaluate, evolve, and retire the systems in your heritage portfolio.
Development teams often focus on getting code to production, losing sight of what comes after the design and build phase. But we must consider the full life cycle of our systems, from inception to deployment through to sunset, a discipline many companies refer to as site reliability engineering.
While your organization may or may not have an SRE team, you have someone playing that role and we can all benefit from looking at the principles and practices that we can bring to bear on our projects. In this talk, I will introduce the concepts of SRE and how you can adopt these ideas on your applications.
Rich Hickey once said programmers know the benefits of everything and the trade-offs of nothing, an approach that can lead a project down a path of frustrated developers and unhappy customers. As architects, though, we must consider the trade-offs of every new library, language, pattern, or approach and quickly make decisions, often with incomplete information. How should we think about the inevitable technology choices we have to make on a project? How do we balance competing agendas? How do we keep our team happy and excited without chasing every new thing that someone finds on the inner webs?
As architects, it is our responsibility to effectively guide our teams on the technology journey. In this talk I will outline the importance of trade-offs, how we can analyze new technologies, and how we can effectively capture the inevitable architectural decisions we will make. I will also explore the value of fitness functions as a way of ensuring the decisions we make are actually reflected in the code base.
In this session we will build a full application using Vue.js. We will start by discussing how you can start working with Vue, all the way to seeing what it takes to build an app with Vue, including state management and routing.
If you are curious about Vue, this is the session for you. Join us for a 180-minute thrill ride as we explore this fascinating new framework and mindset.
As developers we not only operate in different contexts, but also often have these different contexts interplay as part of our work.
Each of the tools that we use — version control systems like Git (along with collaborative tools like GitHub/GitLab), IDEs like Eclipse and IntelliJ, build systems like Gradle, CI/CD tooling like Jenkins, IaC tools like Ansible, and the command line — all introduce context.
To be effective developers, we need to know when to operate in a certain context and when to combine or tease apart the ways these contexts interplay.
Can you improve your release announcements if you format your commit messages consistently? You bet!
How should your build tool interact with your version control system?
What does naming your files have to do with how you use your IDE?
This session will take a look at several of these contexts — it will attempt to discern between them and explore when you should separate them and when you should bring them together.
With lots of examples and lots of quizzes, this session will definitely leave you thinking about a few things.
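As a small, hypothetical illustration of the commit-message question above (in Java; the commit subjects and the "type: summary" convention here are invented for the sketch, loosely echoing the Conventional Commits style): consistently formatted subjects can be grouped mechanically into a draft release announcement.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ReleaseNotesSketch {
    public static void main(String[] args) {
        // Hypothetical commit subjects, each formatted as "type: summary".
        List<String> commits = List.of(
                "feat: add CSV export",
                "fix: handle empty upload",
                "feat: support dark mode",
                "chore: bump build plugin");

        // Group summaries under their type prefix, preserving first-seen order.
        Map<String, StringBuilder> sections = new LinkedHashMap<>();
        for (String subject : commits) {
            String[] parts = subject.split(": ", 2);
            sections.computeIfAbsent(parts[0], k -> new StringBuilder())
                    .append("  - ").append(parts[1]).append('\n');
        }
        sections.forEach((type, body) -> System.out.print(type + ":\n" + body));
    }
}
```

Nothing clever is happening here, and that is the point: the value comes from the consistent message format, not from the tooling.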
An overview of various popular streaming technologies on the JVM: Kafka Streams, Apache Storm, Spark Streaming, and Apache Beam. We'll discuss a “bill of rights” of what to expect from all streaming libraries and frameworks: security, failover, and exactly-once processing.
Streaming is now an essential part of our lives. We have cheaper drives, faster networks, and more memory. We can haul tons of data, but we need to process that data, manipulate it, and enrich it. To do so we need some sort of streaming solution. Let's look at the most common ones and expose the differences and similarities between frameworks so you, the attendee, can make a better decision.
Kafka is more than just a messaging queue with storage. It goes beyond that, and with Confluent's open source technology it has become a full-fledged data ETL and data streaming ecosystem.
When we utter the word Kafka, it no longer means just one component; it can be an entire data pipeline ecosystem that transforms and enriches data from source to sink. It offers different ways to handle that data as well. In this presentation, we will briefly discuss a variety of technologies from the Confluent stack beyond Kafka Streams and KSQL that contribute to the overall streaming experience: Kafka Connect and the Schema Registry.
Imagine needing more than one implementation of hashCode in a single class. Can you change implementations on the spot? Probably not; there may be too many dependencies on your implementation. Time to break out an adapter pattern, a utility class, or, better yet, a type class! A type class is a kind of template found in strongly static functional programming languages. Imagine a template that can also read and write information as a side effect. Type classes are powerful.
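As a rough sketch of the idea in Java (the talk itself uses Scala and Cats; the `Show` type class and its instances below are invented for illustration), behavior is defined apart from the data type and can be swapped per call site:

```java
// A minimal Java sketch of a type class: Show describes how to render a value
// of T, and its instances live apart from T itself.
interface Show<T> {
    String show(T value);
}

public class TypeClassSketch {
    // Two interchangeable instances for the same type: no adapter, no edits to Integer.
    static final Show<Integer> PLAIN = i -> "Int(" + i + ")";
    static final Show<Integer> HEX = i -> "0x" + Integer.toHexString(i);

    // Generic code constrained only by "there exists a Show for T".
    static <T> String describe(T value, Show<T> show) {
        return "value is " + show.show(value);
    }

    public static void main(String[] args) {
        System.out.println(describe(255, PLAIN));
        System.out.println(describe(255, HEX));
    }
}
```

In Scala the instance would typically be resolved implicitly; Java makes the same plumbing explicit.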
For these various type classes, we will be looking at a project called Typelevel Cats. Typelevel is a group of projects that adhere to a code of conduct and embrace modular, statically typed, functional, open source programming. Cats is the flagship Typelevel project.
Our presentation will be following this story:
Even if you are not a Scala programmer, you may want to come in and see how type classes work, because I am making a bet: this will be something used by other JVM languages in the future. Kotlin doesn't have it yet, Groovy doesn't have it yet, and TypeScript doesn't either, although some projects are working toward the idea.
Still grappling with generics? This will be an attempt to clear the air about them. What are wildcards? What is extends? What is super? What is covariance? What is contravariance? What is invariance? What is erasure? Why and when do I need all of this?
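To make that vocabulary concrete, here is a small, self-contained Java sketch (the names are invented for illustration) of the classic producer-extends/consumer-super rule:

```java
import java.util.ArrayList;
import java.util.List;

public class PecsDemo {
    // "? extends Number" -- covariant use: src PRODUCES Numbers, so we may only read from it.
    // "? super Number"   -- contravariant use: dst CONSUMES Numbers, so we may only write to it.
    static void copy(List<? extends Number> src, List<? super Number> dst) {
        for (Number n : src) {
            dst.add(n);
        }
    }

    public static void main(String[] args) {
        List<Integer> ints = List.of(1, 2, 3);  // Integer is a subtype of Number
        List<Object> sink = new ArrayList<>();  // Object is a supertype of Number
        copy(ints, sink);                       // both arguments fit thanks to the wildcards
        System.out.println(sink);

        // Invariance: a copy(List<Number>, List<Number>) signature would reject BOTH
        // arguments above, because List<Integer> is not a List<Number>, and neither
        // is List<Object>.
    }
}
```

Erasure is why none of this exists at runtime: the wildcard checks happen entirely at compile time.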
One of the more painful topics in any statically typed language on the JVM is generics, or parameterized types. This presentation sets out to overcome some of those hurdles and demystify the terms that often vex us. We will cover:
In this example-driven presentation, you'll learn how to leverage Spring Boot to accelerate application development, enabling you to focus coding on logic that drives application requirements with little concern for code that satisfies Spring's needs.
For over a decade, Spring has sought to make enterprise Java development easier. It began by offering a lighter alternative to EJBs, but went on to address things such as security, working with various sorts of databases, cloud-native applications, and reactive programming. And along the way, Spring even took steps to make itself easier to use, offering Java-based and automatic component configuration. Even so, there's still a lot of near-boilerplate code required to develop Spring applications.
Enter Spring Boot. Spring Boot's primary purpose is to make Spring easier to work with. It achieves this in three ways:
All together, Spring Boot lets you focus on fulfilling your application's requirements without worrying about writing code that satisfies the needs of a framework.
In this session, you'll learn how to take your Spring Boot skills to the next level, applying the latest features of Spring Boot. Topics may include Spring Boot DevTools, configuration properties and profiles, customizing the Actuator, and crafting your own starters and auto-configuration.
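For a taste of the configuration-properties-and-profiles topic (the `greeting.message` key is invented for illustration; `server.port`, profile-specific `application-{profile}.properties` files, and `spring.profiles.active` are standard Spring Boot behavior), a sketch of how profile overrides layer on top of defaults:

```properties
# application.properties -- defaults for every profile
greeting.message=Hello from dev
server.port=8080

# application-prod.properties -- these values override the defaults when the
# 'prod' profile is active, e.g. java -jar app.jar --spring.profiles.active=prod
greeting.message=Hello from prod
server.port=80
```

Keeping environment differences in profile files rather than in code is one of the ways Boot keeps your classes focused on application requirements.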
In this session, we'll explore Spring Security and OAuth2, including building an OAuth2 authorization server, fronting an API with a resource server, and verifying an OAuth2 access token's claims to ensure that the client is allowed to access the resource they are asking for.
OAuth2 offers a means by which a client application can request authorization to access a resource and be given an access token that must be presented when making HTTP requests. This involves creating an authorization server that issues tokens and defining a resource server that acts as a wall around an API, verifying the presented access token's claims before allowing the request to proceed.
Spring Security has historically supported OAuth2 as part of a separate project called Spring Security for OAuth. But gradually, Spring's OAuth2 support is moving into the main Spring Security project.
In this example-driven session, we'll explore the Alexa Skills Kit (ASK) and see how to develop skills for Amazon's Alexa. You'll learn how to use the ASK CLI to jumpstart skill development and how to create conversational applications in NodeJS.
The way we communicate with our applications is an ever-evolving experience. Punch cards gave way to keyboards. Typing on keyboards was then supplemented by pointing and clicking with a mouse. And touch screens on our phones, tablets, and computers are now a common means of communicating with applications.
These all lack one thing, however: They aren’t natural.
As humans, we often communicate with each other through speech. If you were to walk up to another human and start tapping them, you’d likely be tapped (or punched) in response. But when we talk to our applications, we communicate on the machine’s terms, with keyboards, mice, and touch screens. Even though we may use these same devices to communicate with other humans, it’s really the machine we are communicating with—and those machines relay what we type, click, and tap to another human using a similar device.
Voice user-interfaces (Voice UIs) enable us to communicate with our application in a human way. They give our applications the means to communicate to us on our terms, using voice. With a voice UI, we can converse with our applications in much the same way we might talk with our friends.
Voice UIs are truly the next logical step in the evolution of human-computer interaction. And this evolutionary step is long overdue. For as long as most of us can remember, science fiction has promised us the ability to talk to our computers. The robot from Lost in Space, the Enterprise computer on Star Trek, Iron Man’s Jarvis, and HAL 9000 (okay, maybe a bad example) are just a few well-recognized examples of science fiction promising a future where humans and computers would talk to each other.
Our computers are far more powerful today than the writers of science fiction would have imagined. And the tablet that Captain Picard used in his ready room on Star Trek: The Next Generation is now available with the iPad and other tablet devices. But only recently have voice assistants such as Alexa and Google Assistant given us the talking computer promised to us by science-fiction.
There's nothing new or exciting about relational databases. We abstract them away with ORMs, grudgingly write a query here or there, but generally try to forget about them entirely. Then the performance and scalability problems begin. “Sharding, the secret ingredient in the web-scale sauce” often won't help us.
The database is at the heart of nearly every system we build. Reading data and writing data account for the majority of performance bottlenecks. When it comes to SQL and relational databases, the syntax is easy, but the concepts often aren't. The most important knowledge is not obvious but it is necessary to make the right design, query, and optimization decisions.
Indexing, a glimpse under the hood of the storage engine and the query optimizer, and some best practices are all you need to bring your DB skills head and shoulders above your peers' and to get ready to build bigger, better, faster apps.
In Part 1, you learned the core principles of influence and persuasion. How do we take this back to the office and apply what we've learned?
We dive deep into specific strategies to get both the team and the business on board with your ideas and solutions. We cover several real-world patterns you can follow to be more effective and more persuasive. Part 1 was conceptual; Part 2 is practical.
By the end of this conference you will have learned many new tools and technologies. The easy part is done; now for the hard part: getting the rest of the team (and management) on board with the new ideas. Easier said than done.
Whether you want to effect culture change in your organization, lead the transition toward a new technology, or are simply asking for better tools; you must first understand that having a “good idea” is just the beginning. How can you dramatically increase your odds of success?
You will learn 12 concrete strategies to build consensus within your team as well as 6 techniques to dramatically increase the odds that the other person will say “Yes” to your requests.
As a professional mentalist, Michael has been a student of psychology, human behavior, and the principles of influence for nearly two decades. There are universal principles of influence that you must understand and leverage if you want to be a more effective leader of change in your organization.
In this session we discuss strategies for getting your team on board as well as when/how to approach management within the department and also higher-up in the organization.
Great leaders inspire, excite, and empower those in their teams. These leaders help create a team that is more than the sum of its parts; in short, a great leader can be a force multiplier for the team.
But what makes these leaders force multipliers? Is it simply raw talent? Charisma? How are they different from the bad leaders who become bottlenecks and roadblocks?
In this session, we explore the answer to that question and identify the skills and principles that create force multipliers. Put these skills into action and you can be one too!
If you’ve spent any amount of time in the software field, you’ve undoubtedly found yourself in a (potentially heated) discussion about the merits of one technology, language, or framework versus another. And while you may have enjoyed the technical debate, as software professionals we owe it to our customers (as well as our future selves) to make good decisions when it comes to picking one technology over another.
In this talk, I will explore what criteria we should consider when comparing technologies, how we can avoid burning platforms as well as what to do when we’ve reached a dead end. We will also apply these techniques to a current technology or two.
In this example-driven presentation, we'll look at Spring Data REST, an extension to Spring Data that exposes your data repositories as a RESTful API, complete with hypermedia links. We'll start with essential Spring Data REST, but then go beyond the basics to see how to customize the resulting API to be more than just CRUD operations over HTTP.