Everyone (including your humble speaker) talks about Modularization. For once, let's focus on other interesting and exciting parts of Java 9 that do not involve modularization. In this presentation we will talk about key improvements and additions to the Java language and the JDK introduced in Java 9.
We will first focus on things that can be done better in Java 9 compared to previous versions of Java.
Then we will look at things we simply could not do before but we can benefit from now.
And finally, we will not talk about Modularization. Well, I lied, we will discuss that just a bit.
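To set the stage, here is a minimal sketch, not necessarily the talk's own examples, showing a few of those non-module additions: collection factory methods, Stream.takeWhile, and Optional.ifPresentOrElse.

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Stream;

public class Java9Additions {
    public static void main(String[] args) {
        // Collection factory methods (new in Java 9) create compact, immutable collections.
        List<String> names = List.of("Ada", "Grace", "Linus");
        Map<String, Integer> ports = Map.of("http", 80, "https", 443);

        // Stream gained takeWhile/dropWhile for working with ordered data.
        Stream.of(1, 2, 3, 10, 4)
              .takeWhile(n -> n < 5)      // stops at the first element >= 5
              .forEach(System.out::println);

        // Optional.ifPresentOrElse handles the present and empty cases in one call.
        Optional.of(names.get(0))
                .ifPresentOrElse(
                    n -> System.out.println("first: " + n),
                    () -> System.out.println("no names"));

        System.out.println(ports.size());
    }
}
```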
Reactive Programming is no longer an esoteric idea. Many libraries and tools have started providing excellent support for this style of programming, so much so that Java 9 has embraced the idea by including it in the specification. In this presentation we will take a dive into what reactive programming looks like from the Java specification point of view.
Reactive Programming
What problem does it solve?
How to use it from a Java point of view? (see the sketch after this list)
What can you do to make good use of it?
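As a small taste of the specification's Flow API, here is a minimal sketch (the items and the one-at-a-time back-pressure policy are just illustrative) of a publisher and subscriber built with java.util.concurrent.Flow and SubmissionPublisher.

```java
import java.util.List;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    public static void main(String[] args) throws InterruptedException {
        // SubmissionPublisher is the JDK-provided Flow.Publisher implementation.
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription subscription) {
                    this.subscription = subscription;
                    subscription.request(1);          // back-pressure: ask for one item at a time
                }
                @Override public void onNext(String item) {
                    System.out.println("received: " + item);
                    subscription.request(1);          // ready for the next item
                }
                @Override public void onError(Throwable t) { t.printStackTrace(); }
                @Override public void onComplete()         { System.out.println("done"); }
            });

            List.of("tick", "tock").forEach(publisher::submit);
        } // closing the publisher signals onComplete after buffered items are delivered

        Thread.sleep(500); // give the asynchronous delivery a moment to finish
    }
}
```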
After years of stagnation, Java is one of the most vibrant languages of current times. Java 8 saw several improvements and Java 9 a few more. Well, that's only the beginning. Come to this talk to learn about where Java is heading, what's brewing in the language and the ecosystem, well beyond Java 9.
We will look ahead into the next versions of Java, what you can expect, and how you can get ready.
React is a very targeted library for creating rich front ends. While React does well what it sets out to do, developing a practical application requires more than React alone.
In this presentation we will build the front end for a small application that pulls data from a backend and along the way learn how to use React and related libraries to achieve the desired results.
In this presentation, we will dig deeper into creating applications with React.
We will take a deeper look into creating components, dealing with life cycle events, managing state, when and how to use Redux, and how to approach automated testing of the front-end code you create.
JavaScript is no longer that scary nasty language. There are so many wonderful features in the modern versions of this language that I can't stop praising it. In this presentation we will focus on some of the most exciting features of the language that make programming in JavaScript a pleasant experience.
We will dive into the beautiful parts and features of ES6, ES7, and ES8.
On the 2017 tour, I introduced the notion of “serverless” and Functions as a Service (FaaS) platforms. We understood the motivation for serverless computing, compared serverless to other cloud-native infrastructure approaches, navigated some architectural tradeoffs, and took a whirlwind tour of the Big 3 FaaS providers.
In this 2018 edition of the talk, we’ll still cover a few of the same themes to bring new folks up to speed, but we’ll also look at what’s changed in this ecosystem over the past year, take a look at new or enhanced features, offerings, runtimes, and programming models, and examine what use cases are becoming popular for serverless computing. We’ll also look at how tradeoffs have evolved, and definitely throw in a few demos.
In this presentation, we'll build, test, and deploy an image-processing pipeline using Amazon Web Services such as Lambda, API Gateway, Step Functions, DynamoDB, and Rekognition.
We'll take a look at some of the following topics:
Chaos Engineering, pioneered by Netflix, is the discipline of experimenting on a distributed system in order to build confidence in the system's capability to withstand turbulent conditions in production.
In this presentation, we'll take a look at the problem of building resilient software, and discuss how applying Google's SRE principles and patterns for architectural resiliency can help us to solve it. We'll then examine how the practice of Chaos Engineering can help us to prove or disprove the resiliency of our systems.
Apache Spark is a fast data processing engine for large document stores and databases. Spark is highly distributed, optimized, and redundant for manipulating and aggregating data across large clusters.
This talk is an introduction to Apache Spark, its architecture, and its programming API. We start with an introduction to DataFrames, the Catalyst Optimizer, and Spark SQL. We will then venture on to Datasets, discuss the Dataset API, and the functional programming aspects of it. We will touch lightly on RDDs and the pros and cons of using that API. We will then finish with how to connect to data sources like HDFS, S3, Cassandra, Elasticsearch, and Kafka. This presentation will have samples that you can try out at home or at the office.
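For a flavor of the DataFrame and Spark SQL APIs from Java, here is a minimal local sketch; the file path and column names are hypothetical.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class SparkIntro {
    public static void main(String[] args) {
        // Local SparkSession; on a real cluster the master comes from spark-submit.
        SparkSession spark = SparkSession.builder()
                .appName("spark-intro")
                .master("local[*]")
                .getOrCreate();

        // DataFrames: read a CSV file (path is hypothetical) and let Spark infer the schema.
        Dataset<Row> orders = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("data/orders.csv");

        // The Catalyst optimizer plans both the DSL and the SQL forms of the same query.
        orders.groupBy("customerId").count().show();

        orders.createOrReplaceTempView("orders");
        spark.sql("SELECT customerId, COUNT(*) AS orders FROM orders GROUP BY customerId").show();

        spark.stop();
    }
}
```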
Spark Streaming is one of the additions available with Spark that uses its internal architecture to provide a stream-processing framework for processing data in real time.
In this presentation, we will start with a small reintroduction to Spark, its architecture, and what it does. Then we delve into streaming: what purpose it serves, how to set up Spark Streaming, and how to use it. We will discuss how batch intervals drive processing over time, how the internals work, and also how to integrate it with Kafka. We will then talk about some of the more advanced features like checkpointing and windowing.
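For orientation, here is a minimal DStream sketch in Java, assuming the classic Spark Streaming API with a socket source standing in for Kafka; the host, port, and batch interval are just illustrative.

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class StreamingIntro {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("streaming-intro").setMaster("local[2]");

        // The batch interval (here 5 seconds) is the "time" knob that drives micro-batching.
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

        // A socket source keeps the example self-contained; in practice this is often Kafka.
        JavaDStream<String> lines = jssc.socketTextStream("localhost", 9999);
        JavaDStream<String> words = lines.flatMap(line -> Arrays.asList(line.split(" ")).iterator());

        words.countByValue().print();   // print word counts for each 5-second batch

        jssc.start();
        jssc.awaitTermination();
    }
}
```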
Kafka has captured mindshare in the data streaming market, and in this presentation, we knock on its door and see what lies behind. What is the draw? What makes it an attractive addition? How does it compare to message queues and other message streaming services?
We will do a thorough introduction to what Kafka is. We will also discuss Consumers, Producers, and Streams; integration with ZooKeeper; and the performance aspects of using Kafka.
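As a small taste of the Producer API, here is a minimal sketch; the broker address, topic, and key are hypothetical.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        // Broker address and topic name are hypothetical.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Records with the same key land on the same partition, preserving per-key order.
            producer.send(new ProducerRecord<>("orders", "customer-42", "order created"));
        }
    }
}
```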
On the NFJS tour, there are questions that seem to come up again and again. One common example is “How do we determine which new tools and technologies we should focus our energy on learning?” Another is “How do we stop management from forcing us to cut corners on every release so we can create better and more maintainable code?”, which, after a while, becomes “How can we best convince management we need to rewrite the business application?”
There is a single meta-answer to all these questions and many others.
It begins with the understanding that what we as engineers value and what the business values are often very different (even if the ultimate goals are the same). By understanding these different perspectives, it's possible to begin to frame our arguments around the needs and wants of the business. This alone will make any engineer significantly more effective.
This session picks up where “Stop writing code and start solving problems” leaves off, discussing what value is and how we align the values of the business with the needs and values of the engineer.
By the end of this conference you will have learned many new tools and technologies. The easy part is done; now for the hard part: getting the rest of the team and management on board with the new ideas. Easier said than done.
Whether you want to effect culture change in your organization, lead the transition toward a new technology, or are simply asking for better tools, you must first understand that having a “good idea” is just the beginning. How can you dramatically increase your odds of success?
You will learn 12 concrete strategies to build consensus within your team as well as 6 techniques to dramatically increase the odds that the other person will say “Yes” to your requests.
As a professional mentalist, Michael has been a student of psychology, human behavior, and the principles of influence for nearly two decades. There are universal principles of influence that are necessary to both understand and leverage if you want to be a more effective leader of change in your organization.
In this session we discuss strategies for getting your team on board as well as when and how to approach management within the department and also higher up in the organization.
In Part 1, you learned the core principles of influence and persuasion. How do we take this back to the office and apply what we've learned?
We dive deep into specific strategies to get both the team and the business on board with your ideas and solutions. We cover several real-world patterns you can follow to be more effective and more persuasive. Part 1 was conceptual; Part 2 is practical.
In the container orchestration space, one of the top contenders is Kubernetes (K8s). This session will go into detail on each component in Kubernetes, along with how to use it. Anyone attending this session should be able to easily get started with K8s and have an understanding of what they would need to do to their application to make it K8s friendly.
This session will dive into different kinds of applications (stateful/stateless, data-focused/microservice-focused) and how they are managed with the different components of Kubernetes:
There is no question that Docker has the attention of the majority of developers. It is clearly easier and better to isolate applications and their dependencies in a Docker image and container runtime. Less easy and obvious is how to manage these containers in a production environment. This session will be led by a speaker who has been running Docker in production since pre-1.0 days and works with dozens of companies that do the same.
This session is broken into two parts. The first part focuses on the different options and approaches for running Docker in a production environment. This ranges from running Docker on statically partitioned nodes managed by Puppet or Chef to running Kubernetes and DC/OS in a more modern datacenter.
The second part focuses on lessons learned from running Docker, including:
There are lots of questions on how to get Docker applications into a production environment. This is the session to discuss the options along with their pros and cons.
This session will start with the discussion of why we need a container orchestrator and will dive into and compare the top contenders which include:
Spock is a Groovy-based testing framework that leverages the “best practices” of the last several years, taking advantage of much of the industry's collective development experience. So combine JUnit, BDD, RSpec, Groovy, and Vulcans… and you get Spock!
There are 3 tools I use on every Java project I control… this is one of them, and with good reason.
This session assumes some understanding of testing and JUnit and builds on it. We will introduce and dig deep into Spock as a test specification and mocking tool. Topics include:
Unit testing
Data driven tests
Mocking and Stubbing
Partial Mocks
Spock Extensions
Serialization is important for anything Big Data. We need to send information over the wire, and we need to do so efficiently. This core concept presentation covers various serialization techniques and libraries so that you can use Akka, Kafka, Spark, and various MQs efficiently.
For this presentation, we will focus on three topics: Core Java Serialization, Apache Avro, and Google's Protocol Buffers. We will review ease of serialization and deserialization, compare performance, and show how to integrate with Akka, Kafka, and Spark.
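As a baseline for that comparison, here is a minimal sketch of the first of the three, core Java serialization, round-tripping an object through bytes (the Order type is just illustrative).

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {
    // A simple value type; serialVersionUID guards against incompatible class changes.
    static class Order implements Serializable {
        private static final long serialVersionUID = 1L;
        final String id;
        final double amount;
        Order(String id, double amount) { this.id = id; this.amount = amount; }
    }

    public static void main(String[] args) throws Exception {
        // Serialize to bytes, the same bytes you might put on the wire for Akka, Kafka, or an MQ.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new Order("A-100", 49.99));
        }

        // Deserialize and verify the round trip.
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            Order copy = (Order) in.readObject();
            System.out.println(copy.id + " / " + copy.amount + " / " + bytes.size() + " bytes");
        }
    }
}
```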
An evolutionary architecture supports incremental, guided change along multiple dimensions.
For many years, software architecture was described as the “parts that are hard to change later”. But then microservices showed that if architects build evolvability into the architecture, change becomes easier. This talk, based on my upcoming book, investigates the family of software architectures that support evolutionary change, along with how to build evolvable systems. Understanding how to evolve architecture requires understanding how architectural dimensions interact; I describe how to achieve appropriate coupling between components and services. Incremental change is critical for the mechanics of evolution; I cover how to build engineering and DevOps practices to support continuous change. Uncontrolled evolution leads to undesirable side effects; I cover how fitness functions build protective, testable scaffolding around critical parts to guide the architecture as it evolves.
The software development ecosystem exists in a state of dynamic equilibrium, where any new tool, framework, or technique leads to disruption and the establishment of a new equilibrium. Predictability is impossible when the foundation architects plan against changes constantly and in unexpected ways. Instead, prefer evolvability over predictability. This keynote illustrates how to achieve evolutionary architectures and how to retrofit existing systems to support better evolution.
Building Evolutionary Architectures requires identifying and creating architectural fitness functions. This hands-on workshop defines fitness functions and provides group exercises to help identify and discover them.
According to the Building Evolutionary Architectures book, an architectural fitness function provides an objective integrity assessment of some architectural characteristic(s). This hands-on workshop provides examples of fitness functions and group exercises to identify, define, and implement a variety of fitness functions: atomic, holistic, continuous, triggered, temporal, and others.
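As one concrete illustration of an atomic fitness function, here is a sketch written against the ArchUnit library; that library choice and the package names are assumptions for illustration, not necessarily what the workshop uses.

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

public class LayeringFitnessFunction {
    public static void main(String[] args) {
        // Package names are hypothetical; the rule guards one architectural characteristic: layering.
        JavaClasses classes = new ClassFileImporter().importPackages("com.example");

        // Throws an AssertionError (failing the test or build) if the domain layer reaches into the web layer.
        noClasses().that().resideInAPackage("..domain..")
                .should().dependOnClassesThat().resideInAPackage("..web..")
                .check(classes);
    }
}
```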
Stories and lessons from architecture, design, process, and other sources, each illustrating important principles and pitfalls for modern architects.
Those who cannot remember the past are condemned to repeat it. –George Santayana
The past is never dead. It's not even past. –William Faulkner
Most developers pursue the Latest and Greatest with intense fervor, yet the history of engineering, including software projects, contains rich lessons that we risk repeating ad nauseam. This session recounts a variety of stories of projects that failed architecturally…and why. Ranging from the Vasa in 1628 to Knight Capital in 2012, each story tells of a mistaken interpretation of some fundamental architectural principle and the consequences, some good, some less so. I also look at the common threads in these stories, which resonate with problems many companies have but don't realize.
How do you create creativity? This talk offers techniques and perspectives to discover, grow, and project your ideas.
Where do ideas for new talks, books, videos, software…ultimately, everything…come from? A common question at conference panels to the speakers is “Where do you get your ideas for talks?” This session answers that question, along with how some of us cultivate new ideas. This talk investigates three aspects of ideation:
— How do new ideas arise?
I cover various ways of synthesizing new ideas: switching axiom(s), mutation, oblique strategies, and a host of other techniques to generate the germ of a new idea.
— How do ideas grow into intellectual property?
I cover techniques for iterating on ideas to discover deeper meanings and connections. I also cover techniques to evolve and grow ideas.
— How do you communicate new IP?
I cover various ways to convey IP: presentations, articles, books, videos, and a host of other media. I talk about writing and presenting techniques to amplify your new idea and get it out into the world.
One key to building new IP is separating ideation, organization, and realization, which often become jumbled. By separating them, we can build practices to allow each to flourish. This talk provides concrete advice in each area to help realize new ideas.
Creativity is the reason we have all the Cool Stuff we have. This session investigates how to fan the spark of an idea into a roaring flame of intellectual erudition.
This session covers the landscape of Big Data tools, technologies and best practices in 2018. You'll leave this session armed with the knowledge you need to build Big Data solutions by assembling the best technologies for you.
We cover the components of a big data pipeline, options available for each module and the pros, cons and best practices for each option.
Big Data is exploding into our industry and these skills are increasingly in demand. Much of our existing body of knowledge needs to be updated, however, as Big Data solutions present a number of unique challenges. In this half-day workshop we start with the questions “What is Big Data?” and “How do big data solutions differ from traditional data architectures?” We then dive into core Hadoop technologies to tackle your Big Data challenges and dive into several tools and technologies for distributed processing of large datasets.
The Big Data space is rapidly changing and with a seemingly dizzying number of technology choices, one might not know where to begin. Our goal with this half-day workshop is to give you an overview of a number of technologies, their respective strengths and weaknesses, and enough knowledge to make informed decisions and build great tech!
An integral part of any DevOps effort is automation. No longer do we wish to manage tens, hundreds, or even thousands of servers by hand, even if that were possible. What we need is a programmatic way to create and configure servers, whether for local development or all the way to production.
This is where tools like Ansible come into play. Ansible offers us a way to define what our server configurations are to look like using plain-text, version-controlled configuration files.
Not only does this help with avoiding “snow-flakes”, but it promotes server configuration to participate in the SDLC, pulling server configuration closer to the developers.
In this session we will explore what Ansible has to offer, decipher the Ansible terminology, and run some examples to configure a local server.
Ansible, like Git, aims to be a simple tool.
The benefit here is that the level of abstraction that Ansible offers is paper-thin, with no complicated workflows, or opinions enforced by the tool itself.
The downside is that without a prescribed approach to Ansible, developing your playbooks often becomes a case of trial-and-error.
As engineers steeped in the DevOps mindset we must be able to use the tool effectively, allowing us to accelerate and shorten the lead time from development to production.
In this session we will take a look at some lessons learned when working with Ansible. Topics covered:
As developers we not only operate in different contexts, but also often have these different contexts interplay as part of our work.
Each of the tools that we use — version control systems like Git (along with collaborative tools like Github/Gitlab), IDEs like Eclipse/IntelliJ, build systems like Gradle, CI/CD tooling like Jenkins, IaC tools like Ansible, the command line — all introduce context.
To be effective developers we need to know when to operate in a certain context, and when to combine or tease apart the ways these contexts interplay.
Can you improve your release announcements if you format your commit messages consistently? You bet!
How should your build tool interact with your version control system?
What does naming your files have to do with how you use your IDE?
This session will take a look at several of these contexts — it will attempt to discern between them, and explore when you should separate them and when you should attempt to bring them together.
With lots of examples and lots of quizzes, this session will definitely leave you thinking about a few things.
We live and operate in a world where our services not only need to be available 24/7, but must also respond to surges in demand and scale down when demand is lower.
Reactive systems and reactive architectures have arisen to address this very concern.
Vert.x is a polyglot toolkit that makes writing reactive applications on the JVM possible.
With a non-blocking, event-driven architecture, Vert.x can help you scale your application, leveraging the multi-threaded nature of the JVM and the multi-core abilities of your hardware.
In this session we will take a look at Vert.x, its programming model, and its architecture.
We will write some code, and explore the ecosystem around Vert.x, as well as discuss some of the gotchas that you might come across as you develop Vert.x applications.
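As a preview of that programming model, here is a minimal verticle sketch (the port and response text are just illustrative).

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start() {
        // Non-blocking HTTP server; the handler runs on the event loop, so never block inside it.
        vertx.createHttpServer()
             .requestHandler(req -> req.response()
                                       .putHeader("content-type", "text/plain")
                                       .end("Hello from Vert.x"))
             .listen(8080);
    }

    public static void main(String[] args) {
        // Deploying the verticle; multiple instances would let Vert.x use more event-loop threads.
        Vertx.vertx().deployVerticle(new HelloVerticle());
    }
}
```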
The late Stephen Covey, world-renowned leadership author and speaker, said many times that “You don’t break principles. You only break yourself against them.” Like any other discipline, software engineering is governed by its own set of unbreakable principles. Those who discover and align their behavior with them often seem to be good at this business we call software, and those who ignore or willfully violate them inevitably fail, sometimes in spectacular ways.
Our industry has existed for several decades. One could logically surmise that we should have discovered and aligned around at least a few of these principles by now. And yet the trained eye of a historian, focused on the events of our industry’s last two decades, would observe a troubling cycle of discovery leading to failure, leading to rediscovery and further failure. What is the nature of this cycle? Why are we trapped? How do we escape? We’ll confront these questions and more in this tragicomedy that has something for everyone, from the novice script kiddie to the seasoned blowhard jamboree.
If you’ve spent any amount of time in the software field, you’ve undoubtedly found yourself in a (potentially heated) discussion about the merits of one technology, language, or framework versus another. And while you may have enjoyed the technical debate, as software professionals, we owe it to our customers (as well as our future selves) to make good decisions when it comes to picking one technology over another.
In this talk, I will explore what criteria we should consider when comparing technologies, how we can avoid burning platforms as well as what to do when we’ve reached a dead end. We will also apply these techniques to a current technology or two.
We developers really like code.
Code, being plain text, can be version-controlled, versioned, and follow a traditional SDLC.
For the longest time, however, we were forced to live with having most of our CI/CD and server configurations live outside of our codebases, often at the mercy of infrastructure/operations teams.
With the evolution of DevOps comes the notion of constructs like IaC (Infrastructure as Code), and with Jenkins 2.0, we can now manage our Jenkins job configurations as code!
In this session we will explore the concept of “Pipelines as Code”, including the DSL that Jenkins offers, and how we can use this to configure Jenkins jobs via simple, version-controlled Jenkinsfiles. We will see how we can create Jenkins jobs by autodiscovering repositories, as well as when we branch our code to create releases.
In this example-driven presentation, you'll learn how to leverage Spring Boot to accelerate application development, enabling you to focus coding on logic that drives application requirements with little concern for code that satisfies Spring's needs.
For over a decade, Spring has sought to make enterprise Java development easier. It began by offering a lighter alternative to EJBs, but continued on to address things such as security, working with various sorts of databases, cloud-native applications, and reactive programming. And, along the way, Spring even took steps to make itself easier to use, offering Java-based and automatic component configuration. Even so, there's still a lot of near-boilerplate code required to develop Spring applications.
Enter Spring Boot. Spring Boot's primary purpose is to make Spring easier to work with. It achieves this in three ways:
Altogether, Spring Boot lets you focus on fulfilling your application's requirements without worrying about writing code that satisfies the needs of a framework.
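As a small illustration of that focus, here is a minimal sketch of a complete Spring Boot web application, assuming the web starter is on the classpath.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// One class is enough for a runnable web application: auto-configuration supplies
// the embedded server, and the starter dependency supplies a curated classpath.
@SpringBootApplication
@RestController
public class HelloApplication {

    @GetMapping("/hello")
    public String hello() {
        return "Hello, Spring Boot";
    }

    public static void main(String[] args) {
        SpringApplication.run(HelloApplication.class, args);
    }
}
```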
In this session, you'll learn how to take your Spring Boot skills to the next level, applying the latest features of Spring Boot. Topics may include Spring Boot DevTools, configuration properties and profiles, customizing the Actuator, and crafting your own starters and auto-configuration.
TBD
Serverless is a big topic these days. More specifically, Function as a Service (FaaS) allows small-scale functions deployed in the cloud to run only as needed. Compared to Platform as a Service (PaaS) services, which are “always on” and require explicit scaling, FaaS services wake up when needed and are scaled by the platform automatically.
Imagine a service that only gets invoked once per hour. In PaaS, that service is always running and ready to handle requests, consuming processor resources and ultimately costing more to be ready at a moment's notice. In contrast, FaaS services awaken quickly upon invocation and go away when finished, conserving resources and cost.
In this example-driven session, we'll see how to create and deploy Function as a Service (FaaS) services with Spring and Spring Boot.
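One likely shape for such a service is a plain java.util.function.Function exposed as a bean via Spring Cloud Function; that choice is an assumption here (the session may take a different approach), and it presumes the appropriate Spring Cloud Function starter and platform adapter are on the classpath.

```java
import java.util.function.Function;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class UppercaseFunctionApp {

    // The platform adapter (AWS Lambda, Azure, a local web endpoint, ...) invokes this bean;
    // the business logic itself knows nothing about the FaaS provider.
    @Bean
    public Function<String, String> uppercase() {
        return String::toUpperCase;
    }

    public static void main(String[] args) {
        SpringApplication.run(UppercaseFunctionApp.class, args);
    }
}
```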
In this session, we'll explore the new reactive features in Spring 5 to build reactive, non-blocking applications using Spring's familiar programming model.
Traditionally, applications have been built using a blocking, synchronous model. Although comfortable and intuitive for most programmers, this model doesn't scale well. And although there are several new approaches to reactive programming, they don't necessarily fit into the familiar programming model that Spring developers are accustomed to working with.
Spring 5 has introduced a set of new reactive features, enabling non-blocking, asynchronous code that scales well using minimal threads. Moreover, it builds on the same concepts and programming models that Spring developers have used for years.
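As a quick preview of that programming model, here is a minimal Spring WebFlux sketch (the endpoint and data are just illustrative), returning a Reactor Flux instead of a fully materialized list.

```java
import java.time.Duration;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class QuoteController {

    // Returns a Flux instead of a List: nothing blocks, and items are pushed as they are produced.
    @GetMapping(path = "/quotes", produces = "text/event-stream")
    public Flux<String> quotes() {
        return Flux.just("reactive", "non-blocking", "back-pressured")
                   .delayElements(Duration.ofSeconds(1)); // streamed as Server-Sent Events
    }
}
```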
Development teams often focus on getting code to production, losing sight of what comes after the design and build phase. But we must consider the full life cycle of our systems from inception to deployment through to sunset, a discipline many companies refer to as site reliability engineering.
While your organization may or may not have an SRE team, you have someone playing that role and we can all benefit from looking at the principles and practices that we can bring to bear on our projects. In this talk, I will introduce the concepts of SRE and how you can adopt these ideas on your applications.
By now I bet your company has hundreds, maybe thousands, of services; heck, you might even consider some of them micro in stature! And while many organizations have plowed headlong down this particular architectural path, your spidey sense might be tingling…how do we keep this ecosystem healthy?
In this talk, I will go beyond the buzzwords into the nitty gritty of actually succeeding with a service based architecture. We will cover the principles and practices that will make sure your systems are stable and resilient while allowing you to get a decent night's sleep!
Back in the day, it used to be so simple. Our projects had a main.js file that contained a few hundred lines and every so often the corporate communication department would ship out some new CSS files. But now things are not quite so easy. With more and more single page apps containing thousands or hundreds of thousands of lines of JavaScript, we're going to need a bigger boat.
In this talk I will explore various options you can deploy on your projects to tame the mass of code that lives on the front end of our applications. From NPM to Gulp to Webpack, this talk will help you establish a front end pipeline.
Gradle is the build tool of choice in the open source world, and rapidly becoming the standard in industry as well. Anyone who works with Gradle on a Java project knows the basics of the Java plugin and how to write simple tasks in Groovy. Gradle can do much more, however. This talk will demonstrate how to write your own custom task classes and how to create Gradle plugins from them. Other Gradle features will be demonstrated as well, including file manipulation, incremental builds, generating the Gradle wrapper, and resolving conflicts in dependencies.
Gradle Inc also provides a free build scan capability to analyze build files. This too will be demonstrated, as well as profiling your build, determining dependencies, and more.
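For a flavor of what a custom task class can look like, here is a minimal sketch written in plain Java (the task name and property are hypothetical); a build script would register it with tasks.register and configure the greeting.

```java
import org.gradle.api.DefaultTask;
import org.gradle.api.tasks.TaskAction;

// A custom task class; a build script can register it with
// tasks.register("hello", HelloTask) and set the greeting property.
public class HelloTask extends DefaultTask {

    private String greeting = "Hello from a custom Gradle task";

    public void setGreeting(String greeting) {
        this.greeting = greeting;
    }

    @TaskAction
    public void printGreeting() {
        // Everything in a @TaskAction runs at execution time, not configuration time.
        getLogger().lifecycle(greeting);
    }
}
```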
Good discussions are supposed to diverge from their intended path. Free association is a feature, not a bug, and helps you see new connections between ideas. Without structure, however, it can be difficult to add context to new ideas and understand how they relate to more immediate problems. This talk discusses the technique of mental bookmarks – how to remember where you were when a discussion diverged. In addition to giving you a reputation for having an amazing memory, the skill also helps with personal awareness in general.
To give the technique context, we'll look at the fractal nature of success – the way we tend to see our current environment in relative terms, always comparing ourselves to those slightly more successful and slightly less successful.
Containers and microservices are like peanut butter and jelly; they just go together. However, do you know how resources are managed in a container and how that affects your application? This session will dive into how Java is affected living under control groups.
This session will review containers and the technology underlying them, along with different container runtimes such as Docker. We will look at the anatomy of a container, including control groups (cgroups) and namespaces. This session will conclude with the challenges of running the JRE in a container and the impact that has on the efficiency of the process at scale.
This session will include container-friendly changes in Java 9 around memory and ergonomics. It will also provide advice on how to get a consistent CPU share.
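A quick way to see what the JRE thinks it has been given is to print the runtime's view of CPUs and heap from inside and outside a container (a minimal sketch).

```java
public class ContainerSizing {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();

        // Inside a container these values should reflect the cgroup limits, not the host,
        // when the JVM is container-aware (by default since Java 10; via experimental flags
        // in earlier Java 8/9 builds).
        System.out.println("available processors : " + rt.availableProcessors());
        System.out.println("max heap (MiB)       : " + rt.maxMemory() / (1024 * 1024));
    }
}
```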
It happens to us all; there are simply days where it seems impossible to get anything done. This session focuses on techniques and tips to get into the zone, stay in the zone and to protect your productivity, even in disruptive environments.
Rather than focusing on any one productivity methodology (e.g., GTD), this talk analyzes the internal and external factors that affect our productivity and offers broader strategies to get back on track.
JavaScript is one of the most powerful and yet perilous languages. It is arguably the most ubiquitous language, the king of client-side programming on the web. The dynamic nature of the language offers many benefits, but the weak typing and lack of tooling can make changes unpredictable, that is, if we're not careful. Automated testing can greatly alleviate the pains and lead to more fruitful development.
In this presentation we will take a look at the tools, but also the approach, to automate the tests for JavaScript code. We will start with simple cases at first, but quickly dive into dealing with dependencies and measuring code coverage.
Monoliths are out and microservices are in. Not so fast. Many of the benefits attributed uniquely to microservices are actually a byproduct of other architectural paradigms with modularity at their core. In this session, we’ll look at several of the benefits we expect from today’s architectures and explore these benefits in the context of various modern architectural paradigms. We’ll also examine different technologies that are applying these principles to build the platforms and frameworks we will use going forward.
Along the way, we’ll explore how to refactor a monolithic application using specific modularity patterns and illustrate how an underlying set of principles span several architectural paradigms. The result is an unparalleled degree of architectural agility to move between different architectural paradigms.
Big architecture up front is not sustainable in today's technology climate, where expectations are high for delivering high quality software more quickly than ever before. To accept change, teams are moving to agile methods. But agile methods provide little architectural guidance. Attempts to define the architectural vision for a system early in the development lifecycle do not work. In this session, we provide practical guidance for software architecture for agile projects.
We will explore several principles that help us create more flexible and adaptable software systems. We’ll expose the true essence of what’s meant when we say “architectural agility.” And we’ll explore the real goal of software architecture and how we can accommodate architectural change to help increase architectural agility.
Microservice architecture is a modern architectural approach that increases development and delivery agility by focusing on building modular services. The framework we use has a tremendous impact on how quickly and easily we can deliver services. New frameworks are emerging that embrace new approaches for helping us deliver microservices.
In this session, we will explore some modern Java frameworks for building microservices (aka micro frameworks). Example frameworks you may see include Dropwizard, Ratpack, Spark, Ninja, RestExpress, Play, Restlet, and RestX. We'll demonstrate each framework by using a programming kata to build the same service using several different frameworks. Optionally, bring your own laptop, clone the github repo, and you can build the services along with me. To do this, you must have Java 8+ and Gradle.
The way we build and deliver software is changing. We must deliver software more quickly than ever before, and traditional approaches to software architecture, infrastructure, and methodology do not allow us to meet demand. We’ve reached the limits of agility through process improvement alone, and further increases demand that we focus on improving architecture, infrastructure, and methodology simultaneously. 12 Factor is an app development methodology for building modern apps in the modern era.
Building modern apps requires modern methods and 12 Factor is an app development methodology that helps development teams build software by emphasizing development practices that meld together modern architectural paradigms with agile practices like continuous delivery for deployment to cloud platforms. In this session, we’ll examine the 12 Factors and explore how to apply them to apps built using Java.
Java 9 with the Jigsaw module system is here. In this session, we'll explore the basics of the Jigsaw module system and examine the impact it will have on how we build Java applications. We will dig into its major features, including dependency management and Jigsaw services. Once we understand Jigsaw's basics, we will explore what it's going to take to migrate existing Java applications to Java 9 and leverage Jigsaw.
Jigsaw's impact stands to be consequential. Jigsaw will restrict application code from accessing non-published JDK classes (i.e., internal sun.* classes), require you to be explicit in declaring your dependencies, and more. We will explore Jigsaw basics and then dig into the impact Jigsaw will have on migrating existing Java applications to Java 9.
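For reference, here is a minimal module declaration sketch (module, package, and type names are hypothetical) showing explicit dependencies, exports, and a Jigsaw service.

```java
// module-info.java: a minimal module declaration (names are hypothetical).
module com.example.orders {
    // Dependencies must be declared explicitly; java.base is implied.
    requires java.sql;

    // Only exported packages are visible to other modules.
    exports com.example.orders.api;

    // A Jigsaw service: this module provides an implementation of a service interface.
    provides com.example.orders.api.OrderRepository
        with com.example.orders.internal.JdbcOrderRepository;
}
```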
Kotlin is a practical language designed for the JVM. It focuses on pragmatism and safety, with an emphasis on interoperability and tool support. Kotlin is statically typed and includes null checks in the type system itself. The language runs anywhere Java does, from microservices to Android apps. This workshop will show how to use Kotlin to simplify anything you originally planned to do in Java.
Kotlin combines object-oriented programming with functional features like lambda expressions, functional types, and data classes. Since it comes from JetBrains, the company that produces the IntelliJ IDEA development environment, its tool support is excellent. IntelliJ even provides a Java-to-Kotlin converter to help you get started.
This presentation will cover the basic types, defining and calling functions, programming with lambdas, higher order functions, and DSLs. The goal is to give you enough background to be productive while you continue to learn and grow with the language.
This is a revised and updated version of the previous talk, with current thinking from practice and the literature. The talk presents why conflicts with your manager are inevitable based on differences in priorities and perspectives, and how to plan for them. The goal is to show you how to build the loyalty relationship that allows you to get what you need when you need it.
Topics covered will include diagnosing communication styles, lessons from game theory, working within the organizational hierarchy, and lessons on how to build a relationship with your manager that still allows you the freedom to express yourself and what you really want.
Target is a US-based retailer with nearly 2,000 physical stores and each of these stores runs a microservice footprint to support operations including point-of-sale and IoT processing.
This talk will discuss how Target leverages Kubernetes as an infrastructure platform at the edge, and how they've enabled continuous delivery to the store “mini-clouds”.
Organizations that embrace microservice architecture learn quickly they need a host of tooling to support continuous delivery, configuration management, and service discovery. The collective discipline of building and maintaining these tools is known as Platform Engineering.
This talk will discuss the many facets that comprise platform engineering, as well as covering a host of open source tools available to support the task.
Rich Hickey once said programmers know the benefits of everything and the trade-offs of nothing…an approach that can lead a project down a path of frustrated developers and unhappy customers. As architects, though, we must consider the trade-offs of every new library, language, pattern, or approach and quickly make decisions, often with incomplete information. How should we think about the inevitable technology choices we have to make on a project? How do we balance competing agendas? How do we keep our team happy and excited without chasing every new thing that someone finds on the inner webs?
As architects it is our responsibility to effectively guide our teams on the technology journey. In this talk I will outline the importance of trade-offs, how we can analyze new technologies, and how we can effectively capture the inevitable architectural decisions we will make. I will also explore the value of fitness functions as a way of ensuring the decisions we make are actually reflected in the code base.