A down-in-the-trenches look at building, running, and doing day-to-day development with a Continuous Delivery pipeline. This talk is based on my experiences building multiple CD pipelines and optimizing developer workflows to push changes to production all day. I'll walk you through how we transformed a two-day deployment process into a 20-minute CD pipeline that went on to perform more than 20,000 deployments.
During this presentation we'll walk through the evolution of a team teetering on collapse. Production deployments are a long-running ceremony that hasn't really changed in years. Deployments are risky, and everyone involved with the project acts accordingly: deployments can take days, and the company website has scheduled maintenance windows.
Over several months, the team will transform into a model of agile process mastery. Deployments will take minutes instead of days. The team's structure and concerns over deploying to production will also change shape.
During this talk we'll dig into the anatomy of a continuous delivery pipeline, what it is, how it works, and the challenges you'll face making the transition. Where do you start and what are the big four considerations of continuous delivery? Do you need company buy-in or can you start small and grow out to the rest of the organization?
We'll walk through the entire process, talk about team organization, breaking up the monolith, your first steps towards CD, identifying your primary objective, the building blocks of a Microservices architecture, the psychology of continuous delivery, how to write effective code in a CD ecosystem, and we'll build a continuous delivery pipeline and Microservice during the presentation.
How do you build a Cloud Native Application? So many cloud deployments are lift-and-shift architectures; what would it look like if you started from scratch and used only cloud native technologies? During this session we will compare and contrast two applications: one built using a traditional Java application architecture, the other using a cloud native approach. How does building an app for the cloud change your architecture, application design, development, and testing processes? We'll look at all this and more.
During this session we'll dive into the details of Cloud Native applications, their origins and history. Then we'll look at what's involved when you move from an on-prem data center to the cloud. Should you change your approach to application design now that you are in the cloud? If so, what does a cloud-based design look like?
By the end of the session, you’ll have a better understanding of the benefits of a cloud native application design, how to best leverage cloud capabilities, and how to create performant Microservices.
I hope you'll join me on this exciting survey of Serverless Computing. When you think of Serverless you probably think of Lambdas or Cloud Functions, but there's so much more to the Serverless ecosystem. During this session we'll look at Serverless Computing in all its various forms and discuss why you might want to use a Serverless architecture and how it compares to other cloud services.
Serverless is an exciting component of Cloud computing, and it's growing rapidly. During this session we'll look at all things Serverless and discuss how to incorporate it into your system architecture. We'll build a Lambda function during the presentation and talk about the pros and cons of Serverless and when you should use Serverless systems.
There are a few Serverless frameworks available today that make building a function easier than ever. We'll look at a couple of these frameworks, build a local Serverless function, and deploy it to AWS (if the network cooperates). Finally, we'll talk about performance considerations, how to structure your Serverless functions, and how to perform safe l
A real-world look at using Consumer Driven Contracts in practice. How to eliminate a test environment and how to build your services with CDC as a key component.
One of the biggest challenges in building out a Microservices architecture is integration testing. If you use a Continuous Delivery pipeline, none of your environments, stage or production, are even in a steady state. How do you perform adequate testing when your environment can change during your test? How do you manage a complex web of interdependent Microservices? How do you safely evolve your API in this environment?
Consumer Driven Contracts are a key component for a successful Microservices strategy. We'll look at different CDC frameworks and how to use them. We'll discuss developer workflows and how to ensure your API changes don't break client implementations. Finally, we'll build a couple of Microservices and walk through the lifecycle of Consumer Driven Contract tests.
Docker has revolutionized how we build and deploy applications. While Docker has revolutionized production, it's also had a huge impact on developer productivity. Anyone who's used Docker for an extended period of time will tell you it's a blessing and a curse: yes, it's portable, but networking and other characteristics of Docker can make the most chill developer long for plain old Java. During this session we'll look at Docker's good points and how to tackle the difficult areas. The end goal: enable anyone on your team to go from zero to productive in under 20 minutes.
This session will show you how to structure a Java CRUD application that leverages Docker to enable rapid developer onboarding and schema migrations, and to utilize common cloud services (like Pub/Sub), all from your laptop. This setup will enable you to build a streamlined, Continuous Delivery ready, Cloud Native application; the same configuration that enables local development will supercharge your CI/CD pipeline.
If you work in a polyglot environment, you know switching to a new service can be a difficult process. There are new tools to install, environments to setup, databases to use and so on. Docker can streamline this process and enable you to switch between services quickly and easily.
By the end of this session, you'll have a pattern for creating team friendly Microservices that works well in a Continuous Delivery Pipeline and can be deployed to any container environment. Docker will enable you to build, test and deploy your code faster and safer than ever before.
Would Chuck Norris ask you to come hear him speak at a conference? No, he wouldn't. He would TELL you that you're coming, and then roundhouse kick you in the face if you gave him any more lip.
“What would Chuck Norris do?” is a philosophy this session will cover in depth. Other topics include: badass vs a-hole, human duck typing, the art of [not] caring, instrumentality, and what your facial hair says about you. You won't learn any new code in this session, but you might unleash a Pandora's box of awesomeness that will change the way you interact with your coworkers forever.
Many new features have been added between the last Long Term Support release in Java 8 and the current one in Java 11. This talk will summarize many of those capabilities, from the Jigsaw implementation of JPMS to unmodifiable collections to local variable type inference and more. In addition to the basic code changes, the new six-month release schedule and associated licensing issues will be reviewed.
If, as anticipated, Java 12 is released in March and Java 13 in September, new features from those versions will also be included, even though they will break the joke in the title of this talk.
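To give a flavor of two of the features covered, here is a minimal, illustrative sketch of local variable type inference (Java 10) and the unmodifiable collection factory methods (Java 9); the class and variable names are made up for the example:

```java
import java.util.List;
import java.util.Map;

public class Java11Sampler {
    public static void main(String[] args) {
        // Local variable type inference (Java 10): the compiler infers List<String>.
        var names = List.of("alpha", "beta", "gamma");   // unmodifiable collection factory (Java 9)

        // var also works in enhanced for loops and try-with-resources.
        for (var name : names) {
            System.out.println(name.toUpperCase());
        }

        // Unmodifiable maps via factory methods (Java 9).
        var releases = Map.of("Java 9", 2017, "Java 11", 2018);
        releases.forEach((version, year) -> System.out.println(version + " -> " + year));
    }
}
```

Note that the collections returned by List.of and Map.of throw UnsupportedOperationException if you try to modify them.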
The fundamental testing libraries in Java have undergone complete redesigns in the past few years. JUnit 5, known as JUnit Jupiter, redesigns the most well-known tool in all of testing. This talk will demonstrate the new features, how they are intended to be used, and discuss experimental ideas in the pipeline.
JUnit has been remarkably stable over the years and is one of the most widely adopted frameworks in the Java world. The latest version, JUnit 5, takes JUnit to the next level. Full of new features like conditional test execution, parameterized tests, labeling and filtering tests, and more, it brings all the modern thinking on testing into the JUnit world. It also takes advantage of the functional features added to Java since version 8 to create a powerful, new library for testing your code.
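As an illustrative sketch (not taken from the talk itself), here is what conditional execution and a parameterized test look like in JUnit 5; the test class, names, and sample values are hypothetical:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.condition.EnabledOnOs;
import org.junit.jupiter.api.condition.OS;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class StringUtilsTest {

    // Conditional test execution: this test only runs on Linux or macOS.
    @Test
    @EnabledOnOs({OS.LINUX, OS.MAC})
    void pathSeparatorIsForwardSlash() {
        assertEquals("/", java.io.File.separator);
    }

    // Parameterized test: JUnit runs this method once per value.
    @ParameterizedTest
    @ValueSource(strings = {"racecar", "level", "noon"})
    void detectsPalindromes(String candidate) {
        assertTrue(new StringBuilder(candidate).reverse().toString().equals(candidate));
    }
}
```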
Gradle is the build tool of choice in the open source world and is rapidly becoming the standard in industry as well. Anyone who works with Gradle on a Java project knows the basics of the Java plugin and how to write simple tasks in Groovy. Gradle can do much more, however. This talk will demonstrate how to write your own custom task classes and how to create Gradle plugins from them. Other Gradle features will be demonstrated as well, including file manipulation, incremental builds, generating the Gradle wrapper, and resolving conflicts in dependencies.
Gradle Inc also provides a free build scan capability to analyze build files. This too will be demonstrated, as well as profiling your build, determining dependencies, and more.
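For a sense of what a custom task class involves, here is a minimal sketch written in Java (Groovy or Kotlin work just as well). Dropped into buildSrc, a class like this becomes available to the build script; the task and property names are purely illustrative:

```java
import org.gradle.api.DefaultTask;
import org.gradle.api.tasks.Input;
import org.gradle.api.tasks.TaskAction;

// Register from a build script with something like:
//   tasks.register("greet", GreetingTask) { greeting = "Hello, Gradle" }
public class GreetingTask extends DefaultTask {

    private String greeting = "Hello from a custom task";

    @Input
    public String getGreeting() {
        return greeting;
    }

    public void setGreeting(String greeting) {
        this.greeting = greeting;
    }

    // The @TaskAction method is what runs when the task executes.
    @TaskAction
    public void printGreeting() {
        getLogger().lifecycle(greeting);
    }
}
```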
Git. It can be intimidating if you're accustomed to other kinds of source control management. Even if you're already using it and comfortable with the basics, situations can arise where you wish you understood it better. Developers often just want to write code and tell everyone else to take a hike, but the reality is that most of us work on teams where the feature-based code we write must be integrated, tested, and ultimately released.
This session will cover the most critical git concepts, basic and advanced, in a completely visualized way. At the same time, you’ll pick up git terminal commands to help you understand (or even eliminate) a git GUI you already use. Go beyond the basics to learn how to get yourself out of a git pickle, practical release management strategies, and more.
Learn the basic syntax and semantics for the Kotlin programming language. Kotlin is an alternative JVM language that provides null safety, static typing, and powerful IDE support. This talk will emphasize the relationships between Kotlin and Java, highlighting the differences in types, functional programming, collections, and more.
Demonstrations will include:
and much more.
Good discussions are supposed to diverge from their intended path. Free association is a feature, not a bug, and helps you see new connections between ideas. Without structure, however, it can be difficult to add context to new ideas and understand how they relate to more immediate problems. This talk discusses the technique of mental bookmarks – how to remember where you were when a discussion diverged. In addition to giving you a reputation for having an amazing memory, the skill also helps with personal awareness in general.
To give the technique context, we'll look at the fractal nature of success – the way we tend to see our current environment in relative terms, always comparing ourselves to those slightly more successful and slightly less successful.
Machine Learning is a huge, deep field. Come get a head start on how you can learn about how machines learn.
This talk will be an overview of the Machine Learning field. We’ll cover the various tools and techniques that are available to you to solve complex, data-driven problems. We’ll walk through the algorithms and apply them to some real but accessible problems so you can see them at work.
Documents contain a lot of information. We'll introduce you to a variety of techniques to extract it.
Machine Learning techniques are useful for analyzing numeric data, but they can also be useful for classifying text, extracting content, and more. We will discuss a variety of open source tools for extracting content, identifying elements and structure, and analyzing text, and how they can be used in distributed, microservice-friendly ways.
Deep Learning is an evolution of the capabilities of more conventional machine learning to take advantage of the extra data available from Big Data systems. With more data, many of the manual aspects of feature selection and other machine learning steps can be derived automatically. We will highlight many of the main deep learning frameworks and give you a hands-on introduction to what is possible and how you can start to use them.
We will cover:
What comes after machine learning and deep learning? How about dynamic systems that need new ways of finding paths through complex scenarios such as video games, challenging board games and more.
In addition to covering the main ideas of deep reinforcement learning, we will cover some of the main tools and frameworks.
While the Web itself has strong decentralized aspects to how it is used, the backend technologies are largely centralized. The naming systems, the routing systems, and the traffic for a website all point back to the same place: they are centralized technologies. This creates both a liability and a control point.
In order to break free of some of these limitations, new technologies are emerging to provide a more decentralized approach to the Web.
This talk will walk you through some emerging technology to provide decentralized content storage and distribution, edge computing, and more. We will touch upon the Interplanetary Filesystem, WebTorrent, Blockchain spin-offs, and more.
What would happen if web applications got really fast?
We are increasingly able to do more in the browser because of faster networks, optimized JavaScript engines, new standard APIs, and more. There is a new initiative to allow a binary format called WebAssembly that will provide a compiled, cross-platform representation that will take us to the next level. Complex business applications and 3D video games alike will benefit from this new standard. Come hear about what it can do for you.
The #1 fallacy of distributed computing is “The Network is Reliable.” Yet we still build web apps that 100% rely on a server and a network connection. What if we could build web apps that work, regardless of connection state? The promise and capabilities of Progressive Web Apps (PWAs) make this possible, and these capabilities are available today.
This session doesn't focus on a specific technology or backend; while those demonstrations are impressive, they are only useful for a handful of use cases. Instead we look at architecture patterns and techniques that can work with any framework, any backend, and virtually any app.
In 2005 the way we built web applications changed when Google released Google Maps and the AJAX map canvas. This approach fundamentally changed how users expect to interact with web applications. Suddenly any app that sent postback after postback felt cumbersome and positively ancient.
Progressive web apps and offline capabilities are that next big shift. Soon any app that doesn't work offline is going to be as jarring and frustrating a user experience as it would be to go back to a pre-AJAX world. Don't let this be your app!
In tech teams it's a constant firefight. We react. Then we react to the reaction… and the cycle continues. In all this noise, in all this chaos, how do we move forward? How do we remain proactive?
A great leader must be an enabler for the team. At times this means insulating the team from the noise. At other times it means improving the environment for the team. At all times, however, it requires setting clear priorities and conditions for success.
This session is focused on the art of moving forward in even the noisiest environments.
On the NFJS tour, there are questions that seem to come up again and again. One common example is “How do we determine which new tools and technologies we should focus our energy on learning?” Another is “How do we stop management from forcing us to cut corners on every release so we can create better and more maintainable code?” which, after a while, becomes “How can we best convince management we need to rewrite the business application?”
There is a single meta-answer to all these questions and many others.
It begins with the understanding that what we as engineers value and what the business values are often very different (even if the ultimate goals are the same). By understanding these different perspectives, it's possible to begin to frame our arguments around the needs and wants of the business. This alone will make any engineer significantly more effective.
This session picks up where “Stop writing code and start solving problems” leaves off, discussing what value is and how we align the values of the business with the needs and values of the engineer.
You have some modular code with a REST API. You are on your way to Microservices. Next, you package it in a container image that others can run. Simple. Now what? Your service needs to log information, needs to scale and load balance between its clones. Your service needs environment and metadata way outside its context. What about where the service will run? Who starts it? What monitors its health? What about antifragility? Updates? Networking? Oh my.
Don't get flustered. We will explore how Kubernetes simplifies the complexity of distributed computing.
This session will help you understand the terms, architecture and the mechanics of the Kubernetes tools. You will understand how to target your applications to a seemingly complex distributed compute platform.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts (1 of 2)
Aha moments with apps in containers can be quite liberating. The mobile space is saturated with “there's an app for that”. For us, we now expect “there's a container for that”. “Write once, run anywhere” (WORA) has changed to “Package once, run anywhere” (PORA). The growing community of containers is riding up the hype curve. We will look at many ways to assemble pods using architecture patterns you already know.
Your software package delivery and installation is no longer an rpm, deb, dmg, jar, war, native executable, or run script; it is simply an image run with a common container command.
During the presentation, we will explore some examples on Katacoda.
Kubernetes is a strong platform for running and coordinating large collections of services, containers, and applications, but even with all of its standard resources, Kubernetes can't do everything. Fortunately, Kubernetes is highly configurable and extensible. The Operator pattern has emerged as an important extension technique.
We'll break down what this pattern is all about. There are public operators, there are operator registries, and there are frameworks so you can write your own operators. Beyond the Hello World examples, you'll see why Operators are important and how you can get started with them.
Kubernetes out of the box is a strong platform for running and coordinating large collections of services, containers, and applications. As is, Kubernetes is powerful for many solutions.
Remember Underdog? He was a mild-mannered dog, but when stress and conflict were introduced to the plot he took a magic pill and became a superhero. Istio is a superhero for Kubernetes.
Istio is an open, platform-independent service mesh that manages communications between services in a transparent way. With a distributed architecture many things can fail, slow down and become less secure. Istio provides solutions to those stresses in our plot toward this architecture style:
• Traffic management
• Observability
• Policy enforcement
• Service identity and security
We will explore these benefits with Istio through some working examples on Kubernetes. The key point is this meshing solution allows your Java code to be less coupled to the inherent weaknesses of a distributed platform.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts
At the 2009 Agile conference, J.B. Rainsberger declared “Integration tests are a scam.” I agree. Come see some compelling reasons why consumer-driven contract testing is a much better approach, particularly for microservices.
We will explore different testing techniques on Kubernetes, including an important one called “Consumer-Driven Contracts”.
After a brief overview of the concepts, a live demonstration will show you how to:
These days, you can't swing a dry erase marker without hitting someone talking about microservices. Developers are studying Eric Evans's prescient book Domain-Driven Design. Teams are refactoring monolithic apps, looking for bounded contexts and defining a ubiquitous language. And while there have been countless articles, videos, and talks to help you convert to microservices, few have spent any appreciable time asking if a given application should be a microservice. In this talk, I will show you a set of factors you can apply to help you decide if something deserves to be a microservice or not. We'll also look at what we need to do to maintain a healthy micro(services)biome.
There are many good reasons to use a microservices architecture. But there are no free lunches. The positives of microservices come with added complexity. Teams should happily take on that complexity…provided the application in question benefits from the upside of microservices. This talk will cut through the hype to help you make the right choice for your unique situation.
Every organization has at least a phalanx or two in the “Cloud” and it is, understandably, changing the way we architect our systems. But your application portfolio is full of “heritage” systems that hail from the time before everything was as a service. Not all of those applications will make it to the valley beyond, so how do you grapple with your legacy portfolio? This talk will explore the strategies, tools, and techniques you can apply as you evolve towards a cloud native future.
In this talk, you will learn:
Development teams often focus on getting code to production, losing sight of what comes after the design and build phase. But we must consider the full life cycle of our systems from inception to deployment through to sunset, a discipline many companies refer to as site reliability engineering.
While your organization may or may not have an SRE team, you have someone playing that role and we can all benefit from looking at the principles and practices that we can bring to bear on our projects. In this talk, I will introduce the concepts of SRE and how you can adopt these ideas on your applications.
Rich Hickey once said programmers know the benefits of everything and the trade-offs of nothing…an approach that can lead a project down a path of frustrated developers and unhappy customers. As architects, though, we must consider the trade-offs of every new library, language, pattern, or approach and quickly make decisions, often with incomplete information. How should we think about the inevitable technology choices we have to make on a project? How do we balance competing agendas? How do we keep our team happy and excited without chasing every new thing that someone finds on the inner webs?
As architects it is our responsibility to effectively guide our teams on the technology journey. In this talk I will outline the importance of trade-offs, how we can analyze new technologies, and how we can effectively capture the inevitable architectural decisions we will make. I will also explore the value of fitness functions as a way of ensuring the decisions we make are actually reflected in the code base.
In this session we will build a full application using Vue.js. We will start by discussing how you can start working with Vue, all the way to seeing what it takes to build an app with Vue, including state management and routing.
Note: We'll be covering Vue version 3
Vue.js, the new kid on the JavaScript framework block, is taking the world by storm. Vue has bypassed React in GitHub stars, a testament to how popular this framework is becoming. Vue attempts to provide just enough support, with libraries like Vuex and the Vue Router and tooling like the Vue CLI, to get developers productive without aiming to be too opinionated or too flexible.
If you are curious about Vue, this is the session for you. Come in for 180 minutes of a thrill ride as we explore this fascinating new framework and mindset.
Note: We'll be covering Vue version 3
JavaScript will celebrate its 24th birthday in 2020. For a language that has been around for such a while, it has seen very few, if any, changes to the language itself. Well, all that is about to change with ECMAScript.next (or ECMAScript 6). ECMAScript 6 modernizes JavaScript syntax, while bringing in features such as modules for better namespacing, class as a first-class construct, and a variety of additional operators, thus ensuring that JavaScript is ready for the next era of large-scale modern web applications. ES 7, 8, 9 and now 10 all build on the features introduced by ES6 to further the language.
In this session we will take a look at some of the features that ECMAScript 6 / 7 / 8 / 9 and 10 bring to the table. We will take an exploratory approach, and by the end of 3 hours, you will be well versed with ALL of the new features in JavaScript.
An overview of various popular streaming technologies on the JVM: Kafka Streams, Apache Storm, Spark Streaming, and Apache Beam. We'll discuss the “bill of rights” of what to expect from all streaming libraries and frameworks: security, failover, and exactly-once processing.
Streaming is now an essential part of our lives. We have cheaper drives, faster networks, and more memory. We can haul tons of data, but we need to process that data, manipulate it, and enrich it. To do so we need some sort of streaming solution. Let's look at the most common ones and expose the differences and similarities between frameworks so you, the attendee, can make a better decision.
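As one concrete example of the kind of code we'll compare, here is a minimal Kafka Streams sketch that reads from one topic, transforms each record, and writes to another. The application id, broker address, and topic names are placeholders:

```java
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseStream {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-demo");     // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // hypothetical broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Build a topology: read from one topic, transform values, write to another.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("orders-raw");        // hypothetical topics
        source.mapValues(value -> value.toUpperCase())
              .to("orders-uppercased");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```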
Kafka is more than just a messaging queue with storage. It goes beyond that, and with open source technology from Confluent it has become a full-fledged data ETL and data streaming ecosystem.
When we utter the word Kafka, it is no longer just one component but can be an entire data pipeline ecosystem to transform and enrich data from source to sink. It offers different ways to handle that data as well. In this presentation, we define:
We then discuss KSQLDB, a SQL layer built upon Kafka Streams that provides a simple query language for performing streaming operations.
Imagine toString, equals, and hashCode in a single class. Can you change implementations on the spot? Probably not; there may be too many dependencies on your implementation. Time to break out an adapter pattern, a utility class, or better yet, a type class! A type class is a kind of template in very static functional programming languages. Imagine a template that can read and write information as a side effect as well. Type classes are powerful.
For these various type classes, we will be looking at a project called TypeLevel Cats. Typelevel is a group of projects that adhere to a code of conduct and embrace modular, static, functional, open source programming. TypeLevel Cats is the flagship project for Typelevel.
Our presentation will be following this story:
Even if you are not a Scala Programmer, you may want to come in and see how type classes work because I am making a bet… This will be something that will be used by other JVM languages in the future. Kotlin doesn't have it now, Groovy doesn't have it now, TypeScript doesn't either although some projects are working towards this idea.
In this example-driven presentation, you'll learn how to leverage Spring Boot to accelerate application development, enabling you to focus coding on logic that drives application requirements with little concern for code that satisfies Spring's needs.
For over a decade, Spring has sought to make enterprise Java development easier. It began by offering a lighter alternative to EJBs, but went on to address things such as security, working with various sorts of databases, cloud-native applications, and reactive programming. And along the way, Spring even took steps to make itself easier to use, offering Java-based and automatic component configuration. Even so, there's still a lot of near-boilerplate code required to develop Spring applications.
Enter Spring Boot. Spring Boot's primary purpose is to make Spring easier to work with. It achieves this in three ways:
All together, Spring Boot lets you focus on fulfilling your application's requirements without worrying about writing code that satisfies the needs of a framework.
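As a minimal sketch of what that looks like in practice, here is a complete, runnable Spring Boot application in a single class; the class names and endpoint path are illustrative:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// @SpringBootApplication enables component scanning and auto-configuration;
// the embedded web server, JSON conversion, etc. are set up for us.
@SpringBootApplication
public class GreetingApplication {

    public static void main(String[] args) {
        SpringApplication.run(GreetingApplication.class, args);
    }

    @RestController
    static class GreetingController {
        // A hypothetical endpoint, available at http://localhost:8080/greeting
        @GetMapping("/greeting")
        public String greeting() {
            return "Hello, Spring Boot";
        }
    }
}
```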
In this session, you'll learn how to take your Spring Boot skills to the next level, applying the latest features of Spring Boot. Topics may include Spring Boot DevTools, configuration properties and profiles, customizing the Actuator, and crafting your own starters and auto-configuration.
TBD
In this session, we'll explore Spring Security and OAuth2, including building an OAuth2 authorization server, fronting an API with a resource server, and verifying an OAuth2 access token's claims to ensure that the client is allowed to access the resource they are asking for.
Securing REST APIs presents some unique challenges as compared to securing a typical web application. The client of any REST endpoint may not even be a user in the traditional sense, but is more likely to be another application or a browser-based Javascript client. How can you ensure that the clients of your REST API are allowed to access the resources they are asking for?
OAuth2 offers a means by which a client application can request authorization to access a resource and be given an access token that must be presented when making HTTP requests. This involves creating an authorization server that issues tokens and defining a resource server that acts as a wall around an API, verifying the presented access token's claims before allowing the request to proceed.
Spring Security has historically supported OAuth2 as part of a separate project called Spring Security for OAuth. But gradually, Spring's OAuth2 support is moving into the main Spring Security project.
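As a rough sketch of the resource-server side (assuming Spring Security 5.2+ with a JWT issuer configured via the spring.security.oauth2.resourceserver.jwt.issuer-uri property), protecting an API might look like the following; the path and scope names are hypothetical:

```java
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

// Marks this API as an OAuth2 resource server: every request must carry a
// bearer token, which Spring Security validates as a signed JWT.
@EnableWebSecurity
public class ResourceServerConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests(authz -> authz
                .antMatchers("/api/**").hasAuthority("SCOPE_api.read")  // hypothetical scope
                .anyRequest().authenticated())
            .oauth2ResourceServer(oauth2 -> oauth2.jwt());
    }
}
```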
The way we communicate with our applications is an ever-evolving experience. Punch cards gave way to keyboards. Typing on keyboards was then supplemented by pointing and clicking with a mouse. And touch screens on our phones, tablets, and computers are now a common means of communicating with applications.
These all lack one thing, however: They aren’t natural.
As humans, we often communicate with each other through speech. If you were to walk up to another human and start tapping them, you’d likely be tapped (or punched) in response. But when we talk to our applications, we communicate on the machine’s terms, with keyboards, mice, and touch screens. Even though we may use these same devices to communicate with other humans, it’s really the machine we are communicating with—and those machines relay what we type, click, and tap to another human using a similar device.
Voice user-interfaces (Voice UIs) enable us to communicate with our application in a human way. They give our applications the means to communicate to us on our terms, using voice. With a voice UI, we can converse with our applications in much the same way we might talk with our friends.
Voice UIs are truly the next logical step in the evolution of human-computer interaction. And this evolutionary step is long overdue. For as long as most of us can remember, science fiction has promised us the ability to talk to our computers. The robot from Lost in Space, the Enterprise computer on Star Trek, Iron Man’s Jarvis, and HAL 9000 (okay, maybe a bad example) are just a few well-recognized examples of science fiction promising a future where humans and computers would talk to each other.
Our computers are far more powerful today than the writers of science fiction would have imagined. And the tablet that Captain Picard used in his ready room on Star Trek: The Next Generation is now available with the iPad and other tablet devices. But only recently have voice assistants such as Alexa and Google Assistant given us the talking computer promised to us by science-fiction.
In this example-driven session, we'll explore the Alexa Skills Kit (ASK) and see how to develop skills for Amazon's Alexa. You'll learn how to use the ASK CLI to jumpstart skill development and how to create conversational applications in NodeJS.
There's nothing new or exciting about relational databases. We abstract them away with ORMs, grudgingly write a query here or there, but generally try to forget about them entirely. Then the performance and scalability problems begin. “Sharding, the secret ingredient to the web-scale sauce” often won't help us.
The database is at the heart of nearly every system we build. Reading data and writing data account for the majority of performance bottlenecks. When it comes to SQL and relational databases, the syntax is easy, but the concepts often aren't. The most important knowledge is not obvious but it is necessary to make the right design, query, and optimization decisions.
Indexing, a glimpse under the hood of the storage engine and the query optimizer, and some best practices are all you need to know to bring your DB skills head and shoulders above your peers and be ready to build bigger, better, faster apps.
In Part 1, you learned the core principles of influence and persuasion. How do we take this back to the office and apply what we've learned?
We dive deep into specific strategies to get both the team and the business on board with your ideas and solutions. We cover several real-world patterns you can follow to be more effective and more persuasive. Part 1 was conceptual; Part 2 is practical.
By the end of this conference you will have learned many new tools and technologies. The easy part is done; now for the hard part: getting the rest of the team, and management, on board with the new ideas. Easier said than done.
Whether you want to effect culture change in your organization, lead the transition toward a new technology, or are simply asking for better tools; you must first understand that having a “good idea” is just the beginning. How can you dramatically increase your odds of success?
You will learn 12 concrete strategies to build consensus within your team as well as 6 techniques to dramatically increase the odds that the other person will say “Yes” to your requests.
As a professional mentalist, Michael has been a student of psychology, human behavior, and the principles of influence for nearly two decades. There are universal principles of influence that are necessary to understand and leverage if you want to be a more effective leader of change in your organization.
In this session we discuss strategies for getting your team on board as well as when/how to approach management within the department and also higher up in the organization.
Great leaders inspire, excite, and empower those in their teams. These leaders help create a team that is more than the sum of its parts; in short, a great leader can be a force multiplier for the team.
But what makes these force multipliers? Is it simply raw talent? Charisma? How are these leaders different from the bad leaders who become bottlenecks and roadblocks?
In this session, we explore the answer to that question and identify the skills and principles that create force multipliers. Put these skills into action and you can be one too!
If you've spent any amount of time in the software field, you've undoubtedly found yourself in a (potentially heated) discussion about the merits of one technology, language, or framework versus another. And while you may have enjoyed the technical debate, as software professionals we owe it to our customers (as well as our future selves) to make good decisions when it comes to picking one technology over another.
In this talk, I will explore what criteria we should consider when comparing technologies, how we can avoid burning platforms as well as what to do when we’ve reached a dead end. We will also apply these techniques to a current technology or two.
In this example-driven presentation, we'll look at Spring Data REST, an extension to Spring Data that exposes your data repositories as a RESTful API, complete with hypermedia links. We'll start with essential Spring Data REST, but then go beyond the basics to see how to customize the resulting API to be more than just CRUD operations over HTTP.
Spring Data is a brilliant extension to the Spring Framework that makes simple work of exposing a database–any kind of database–via repositories. But as is often the case, your application's data doesn't usually stay within the application. It is consumed by external applications or from a Javascript client in the web browser. That means we'll need to build a RESTful API around those repositories.
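As a minimal sketch of the essential idea, a Spring Data repository annotated for REST exposure might look like this; the Book entity and resource path are hypothetical:

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.springframework.data.repository.CrudRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

// A hypothetical JPA entity to expose.
@Entity
class Book {
    @Id @GeneratedValue
    Long id;
    String title;
}

// Spring Data REST exposes this repository as a hypermedia-driven API at /books,
// with collection and item resources and CRUD operations mapped to HTTP verbs.
@RepositoryRestResource(path = "books", collectionResourceRel = "books")
interface BookRepository extends CrudRepository<Book, Long> {
}
```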
In this presentation, we will discuss Kafka Connect. Kafka Connect is an open source project from Confluent. Kafka Connect provides us a way to move data from a data store as a source and stream or batch that information into Kafka. Kafka Connect also gives us a way to take information from Kafka and send it to another data store, a sink. Every source and sink can be connected to and from various databases and message queues.
What this presentation will entail:
At the end of this presentation, we will have a live demonstration of watching a data pipeline using data stores.
In this session, we'll explore the Spring Boot Actuator, a runtime component of Spring Boot that lets you peer inside a running application and, in some cases, even tweak configuration on the fly. We'll look at many of the Actuator's endpoints, learn how to customize and even create new endpoints, and see how to expose Actuator metrics to several popular instrumentation and monitoring systems.
Spring Boot makes developing applications with Spring easy work by offering auto-configuration for many common application scenarios. And with Spring Boot's starter dependencies, even an application's build file can be easily managed. But Spring Boot's powers don't end when the application is deployed. That's where the real fun begins.
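Beyond the built-in endpoints, the Actuator lets you define your own. Here is a minimal, illustrative sketch of a custom read-only endpoint; the endpoint id and payload are made up, and it still needs to be exposed via management.endpoints.web.exposure.include:

```java
import java.util.Map;

import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.stereotype.Component;

// A hypothetical custom Actuator endpoint, served (once exposed) at
// /actuator/uptime alongside the built-in health, info, and metrics endpoints.
@Component
@Endpoint(id = "uptime")
public class UptimeEndpoint {

    private final long started = System.currentTimeMillis();

    // @ReadOperation maps to HTTP GET on the endpoint.
    @ReadOperation
    public Map<String, Object> uptime() {
        return Map.of("uptimeMillis", System.currentTimeMillis() - started);
    }
}
```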
If you are interested in a different approach to writing your next microservice, or are knee deep in the DevOps world with Kubernetes and Docker (both written using Go), you need to know Go.
Come join me for a rather quick introduction to the language and its merits and shortcomings.
Microservices, DevOps, command-line utilities — Go has been the catalyst in a quiet revolution happening right under our noses. Go, from Google, aims to be a simple language for writing scalable and reliable software. Go brings a unique tilt to many aspects of language design, including enforcing a strict project structure, powerful tooling to support things like code-style enforcement, as well as “goroutines” to allow for concurrency.
An integral part of any DevOps effort involves automation. No longer do we wish to manage tens, hundreds, or even thousands of servers by hand, even if that were possible. What we need is a programmatic way to create and configure servers, be those for local development or all the way to production.
This is where tools like Ansible come into play. Ansible offers us a way to define what our server configurations are to look like using plain-text, version-controlled configuration files.
Not only does this help with avoiding “snow-flakes”, but it promotes server configuration to participate in the SDLC, pulling server configuration closer to the developers.
In this session we will explore what Ansible has to offer, decipher the Ansible terminology, and run some examples to configure a local server.
Ansible, like Git, aims to be a simple tool.
The benefit here is that the level of abstraction that Ansible offers is paper-thin, with no complicated workflows, or opinions enforced by the tool itself.
The downside is that without a prescribed approach to Ansible, developing your playbooks often becomes a case of trial-and-error.
As engineers steeped in the DevOps mindset we must be able to use the tool effectively, allowing us to accelerate and shorten the lead time from development to production.
In this session we will take a look at some lessons learned when working with Ansible. Topics covered:
Graal is a VM, and an awesome VM at that, able to run a variety of languages, and fast. The execution times can be impressive too. This VM can run anything: JavaScript, Python 3, Ruby, R, JVM-based languages like Java, Scala, and Kotlin, and LLVM-based languages such as C and C++.
We are living in truly exciting times. There is so much interesting technology, including in the VM space. Graal is a virtual machine and shared memory system for multiple languages. GraalVM can either run standalone or embedded in OpenJDK or node.js. Graal can even embed inside databases such as MySQL or Oracle. In the presentation, we look at this exciting VM, how to start it, how to run polyglot applications, and how to integrate it all within the same VM.
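For a small taste of the polyglot API, here is a minimal sketch of Java evaluating JavaScript inside the same GraalVM process; the snippet being evaluated is arbitrary:

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class PolyglotHello {
    public static void main(String[] args) {
        // Create a polyglot context and evaluate a JavaScript expression from Java.
        try (Context context = Context.create()) {
            Value result = context.eval("js", "const add = (a, b) => a + b; add(40, 2)");
            System.out.println("JavaScript says: " + result.asInt());
        }
    }
}
```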
One of the biggest impediments to developer productivity and the overall success of the software organization is inefficient processes. Without the right tooling to get to the root of the problem, debugging build and test failures is incredibly frustrating and leads to delays in shipping software.
In this workshop, you’ll work through examples using Maven, Gradle, and Gradle Enterprise on our real data and that of some popular open source projects. You'll learn how to measure build speed and reliability, which metrics are important, how to apply these analyses to your own builds, and how to use build caching to make those builds dramatically faster enabling your team to achieve better developer productivity.
Microservices have helped us break apart back end services, but large front ends often remain problematic monoliths.
In this session you’ll learn how to apply the same concepts to large front-end applications, slicing them into end-to-end verticals. These verticals can then be owned by different teams and even written in different frameworks. Can Angular, React, and Vue all live together in harmony? How about AngularJS and Angular2+? With micro frontends, the answer is yes!
As developers we not only operate in different contexts, but also often have these different contexts interplay as part of our work.
Each of the tools that we use — version control systems like Git (along with collaborative tools like Github/Gitlab), IDEs like Eclipse/IntelliJ, build systems like Gradle, CI/CD tooling like Jenkins, IaC tools like Ansible, the command line — all introduce context.
To be effective developers we need to know when to operate in a certain context, and when to combine or tease apart how these contexts interplay.
Can you improve your release announcements if you format your commit messages consistently? You bet!
How should your build tool interact with your version control system?
What does naming your files have to do with how you use your IDE?
This session will take a look at several of these contexts — it will attempt to discern between them, and explore when you should separate them and when you should bring them together.
With lots of examples and lots of quizzes, this session will definitely leave you thinking about a few things.
For those still grappling with generics, this will be an attempt to clear the air. What are wildcards? What is extends? What is super? What is covariance? What is contravariance? What is invariance? What is erasure? Why and when do I need this?
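As a quick preview of two of those answers, here is a minimal sketch of covariance (? extends, a read-only view) and contravariance (? super, a write-friendly view); the method names are purely illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class VarianceDemo {

    // Covariance (? extends): we can READ Numbers from the list,
    // but cannot add to it, because it might really be a List<Integer>.
    static double sum(List<? extends Number> numbers) {
        double total = 0;
        for (Number n : numbers) {
            total += n.doubleValue();
        }
        return total;
    }

    // Contravariance (? super): we can WRITE Integers into the list,
    // because it accepts Integer or any of its supertypes (Number, Object).
    static void fillWithCounts(List<? super Integer> sink) {
        for (int i = 0; i < 3; i++) {
            sink.add(i);
        }
    }

    public static void main(String[] args) {
        List<Integer> ints = List.of(1, 2, 3);
        System.out.println(sum(ints));          // covariant read works for List<Integer>

        List<Number> numbers = new ArrayList<>();
        fillWithCounts(numbers);                 // contravariant write works for List<Number>
        System.out.println(numbers);
    }
}
```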
Generics, or parameterized types, are one of the more painful items in any statically typed language on the JVM. This presentation sets out to overcome some of these hurdles and explain some of these confusing terms. We will cover the following:
This open source machine learning framework from Google has taken off. Come learn what you can do with it in your own organization.
TensorFlow is a powerful data flow-oriented machine learning framework developed by Google's Brain Team. It was designed to be easy to use and widely applicable to both numeric, neural network-oriented problems and other domains. We'll cover an overview as well as apply it to several fun, realistic problems.