Current approaches to software architecture do not work. As we challenge some of the sacred truths of software development (reuse, failure prevention), we examine how current approaches to software architecture must also change.
Software systems evolve, but current approaches to architecture do not factor in this inevitable evolution. Attempts to define the architectural vision for a system early in the development lifecycle do not work. Big architecture up front (BAUF) does not work. To compound the challenge, agile methods offer very little guidance on how to effectively do software architecture.
In this session, we examine several actionable principles that will help you create software architectures that possess the ability to continuously evolve.
Modularity is the common thread running through modern architectures and platforms. Understanding the role of modularity when making architecture decisions is critical.
The architecture paradigms we’re using are changing. The platforms we deploy our software to are changing. We are confronted with several new architecture paradigms to choose from, such as microservices and miniservices. Yet should we automatically discard some of the proven architectures we’ve used in the past, including more traditional web services? Likewise, new platforms, such as cloud, complicate the decision. Yet, at the heart of this transformation is modularity.
In this session, we'll explore how modularity is shaping the platforms we leverage and the architecture paradigms we use, and we'll offer a clear roadmap with proven guidance for navigating the architecture decisions we must make.
The Java Platform Module System arrived with Java 9. In this session, we provide a clear framework for migrating your applications to JPMS.
With Java 9, modularity became a first-class construct on the Java platform…Finally! In this session, we explore the default module system and examine how to migrate applications. We'll start with the first step in the migration and then examine several strategies for migrating your application.
Finally, we will explore advanced concepts of JPMS that bring greater structural integrity and encapsulation to the Java platform.
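To make the idea concrete, here is a minimal sketch of a module descriptor of the kind we build during the session; the module and package names are purely illustrative:

```java
// module-info.java: the heart of a JPMS module (illustrative names)
module com.example.inventory {
    requires java.sql;                  // explicit dependency on a platform module
    exports com.example.inventory.api;  // only this package is readable by consumers
    // every other package in the module stays strongly encapsulated
}
```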
Organizations have a lot of expertise in Java EE. With MicroProfile, developers can leverage this expertise to build cloud-native applications.
Few consider Java EE a viable option for building microservices. Yet developers have a wealth of knowledge and skill that they may want to leverage to build microservices as they adopt cloud-native architecture patterns. The MicroProfile is a baseline platform definition that optimizes Enterprise Java for a microservices architecture and delivers application portability across multiple MicroProfile runtimes. In this session, we will explore the MicroProfile and examine its viability for using Java EE to build cloud-native applications.
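As a hedged sketch of what this can look like in practice, the fragment below combines familiar JAX-RS with MicroProfile Config; the resource path and configuration key are invented for illustration:

```java
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@Path("/greet")
public class GreetingResource {

    // MicroProfile Config injects values from the environment or config sources
    @Inject
    @ConfigProperty(name = "greeting.message", defaultValue = "Hello")
    String message;

    @GET
    public String greet() {
        return message + " from a MicroProfile runtime";
    }
}
```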
The way we build and deliver software is changing. We must deliver software more quickly than ever before, and traditional approaches to software architecture, infrastructure, and methodology do not allow us to meet demand. We've reached the limits of agility through process improvement alone, and further gains demand that we focus on improving architecture, infrastructure, and methodology simultaneously. 12 Factor is an app development methodology for building modern apps for the modern era.
Building modern apps requires modern methods, and 12 Factor is an app development methodology that helps development teams build software by melding modern architectural paradigms with agile practices like continuous delivery for deployment to cloud platforms. In this session, we'll examine the 12 Factors and explore how to apply them to apps built using Java.
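For a small taste, Factor III ("Config") asks that configuration live in the environment rather than in the deployed artifact; a minimal Java sketch (the variable names and defaults are illustrative) might look like:

```java
public class ConfigDemo {
    public static void main(String[] args) {
        // Factor III: read deployment-specific settings from the environment,
        // not from a file baked into the artifact
        String dbUrl = System.getenv().getOrDefault("DATABASE_URL", "jdbc:postgresql://localhost/devdb");
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        System.out.println("Connecting to " + dbUrl + ", listening on " + port);
    }
}
```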
Java is a language in evolution. There are a handful of language changes in Java 9 and 10, plus several JDK changes in 9, 10, 11, and 12. Some of these changes are significant in that they allow us to do things more effectively than before. The benefits range from reducing code to avoiding the errors that come from verbosity. In this presentation we will explore the language changes first. Then we will visit the additions to the JDK. Along the way we will also look at a few things that have been removed from Java as well.
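One small, illustrative taste of the kind of changes covered: the snippet below mixes the Java 9 collection factory methods with Java 10's local-variable type inference.

```java
import java.util.List;
import java.util.Map;

public class NewLanguageFeatures {
    public static void main(String[] args) {
        // Java 9: compact, immutable collection factories
        List<String> languages = List.of("Java", "Kotlin", "Scala");
        Map<String, Integer> released = Map.of("Java 9", 2017, "Java 10", 2018, "Java 11", 2018);

        // Java 10: 'var' infers the (still static) type from the initializer
        for (var entry : released.entrySet()) {
            System.out.println(entry.getKey() + " shipped in " + entry.getValue());
        }
        System.out.println(languages);
    }
}
```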
We will program with Java quite differently in the future than we do today. The reason is that Java is embracing asynchronous programming like never before. This will have a huge impact on how we create services and web applications. In this presentation we will look at what asynchronous programming is, what continuations are, how they get implemented under the hood, and how we can benefit from them.
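To ground the discussion, here is a minimal sketch of asynchronous composition using today's CompletableFuture API; the lookupPrice method and its simulated latency are invented for illustration:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncDemo {
    // A stand-in for a slow remote call (hypothetical)
    static double lookupPrice(String symbol) {
        try { Thread.sleep(500); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return 42.0;
    }

    public static void main(String[] args) {
        CompletableFuture
            .supplyAsync(() -> lookupPrice("ACME"))           // runs on a pool thread
            .thenApply(price -> price * 1.08)                 // transform when it completes
            .thenAccept(total -> System.out.println("Total: " + total))
            .join();                                          // block only at the very end
    }
}
```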
Based on the book with the same title as this talk, this presentation will explore the modern features of JavaScript that make this a pleasant language to use. Come to this presentation to get a quick jumpstart on the elegant capabilities, from passing arguments, to creating iterators, to destructuring, to working with classes using ES 6 and beyond.
JavaScript modules are highly powerful but can be confusing given the number of options available to export and import. In this presentation we will take a thorough look at what modules are, how to export from a module, how to import, and the dos and don'ts. This will help you get comfortable with modules as you develop applications with JavaScript libraries/frameworks like Angular, React, Vue, etc.
The ideas of reactive systems and reactive programming have been around for a while. However, changes in many areas, from how applications are deployed to how applications are used, along with the rise of big data, have resulted in a renewed interest in this area.
In this presentation we will start with a discussion of the nature of reactive systems and the characteristics of reactive applications. Then we will dive into the design and architectural concerns we have to address to effectively create systems that can meet the demands of high-volume, high-frequency data and interactions.
A lot of software development seems to be concerned with finding answers; troubleshooting issues; solving problems. But what if we’re not asking the right questions? Learning to ask the right questions in the various contexts in which we work is one of the key, yet underemphasized skills that any competent engineer should have!
In this session, we'll examine several critical questions that you should keep in your toolbox. You can use these questions to evaluate tool and technology selection decisions, architecture strategy, data modeling decisions, and more. You should walk away from this session feeling equipped to engage in "adult conversations" about software engineering, where we cut through the hype and focus on the tradeoff decisions before us.
So what are the questions? Well, you’ll just have to come to the session to find out. But if you browse through my Twitter stream, you just might find a few.
As an architectural style, microservices are here to stay. They have crossed the proverbial chasm, and now it’s time to get to work. Microservices provide us with the ability to create truly evolutionary architectures composed of cohesive and autonomous components using well known and characterized distributed systems patterns.
As we create and compose components across the hard boundary of the network, we become deeply interested in establishing the correct boundaries, which has led to renewed interest in system design and decomposition. Fortunately, the tried and true practices of Domain-Driven Design are available to us.
In this presentation, we will cover a distillation of strategic (bounded contexts, subdomains, context mapping) and tactical (aggregates, domain events) DDD techniques and demonstrate how they enable us to create effective event-driven microservices.
NOTE: THIS IS A DISCUSSION ORIENTED SESSION. THERE WILL BE MINIMAL LECTURE AND SLIDES.
According to Wikipedia, metaphysics is the branch of philosophy that studies the essence of a thing. This definition invites the question, “Does software design have an essence?” And if it does, would the discovery and understanding of this essence lead to a fundamental improvement in our ability to build well-designed software? Does design matter? What exactly is the design of software? Can we point to it? Or is it something immaterial?
These are the types of questions that I want to confront in this session. We’ll draw inspiration from various individuals, from Don Norman to Fred Brooks to Jack Reeves to Christopher Alexander. We’ll consider their explorations of design, both in and out of the software realm. But ultimately, we’re going to have a conversation inspired by the Dialogues of Plato, and each iteration of this session will arrive at its own conclusions.
Connascence is a term coined by Meilir Page-Jones in his article "Comparing Techniques by Means of Encapsulation and Connascence." It is a software quality metric that attempts to measure coupling between entities in three dimensions: strength, locality, and degree. Rather than describing architectural coupling as simply loose or tight, connascence gives us a tool to describe and measure the huge diversity in coupling that we encounter in systems. Relatedly, it also gives us a tool for encouraging high cohesion within modules.
Connascence can also be divided into static and dynamic categories, with a long list of specific types such as connascence of name, meaning, timing and identity.
Page-Jones gives us a rule to follow, “Eliminate any unnecessary connascence and then minimize connascence across encapsulation boundaries by maximizing connascence within encapsulation boundaries,” which he says can be applied to “any future design paradigm with partitioning, encapsulation and visibility rules.”
In this session, we’ll examine connascence as it appears within architectural patterns such as REST, message-driven collaboration, and microservices, as well as within various serialization techniques. You’ll walk away with tools to help you better describe and manage coupling within your system architectures.
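As a tiny, hypothetical Java illustration of the vocabulary, the refactoring below weakens connascence of meaning (every collaborator must agree on what a magic value means) into the much weaker connascence of name:

```java
public class ConnascenceDemo {
    enum OrderStatus { NEW, PAID, SHIPPED }

    public static void main(String[] args) {
        // Connascence of meaning: every collaborator must remember that 2 means "shipped"
        int statusCode = 2;

        // Connascence of name: collaborators agree only on a shared name, a far weaker coupling
        OrderStatus status = OrderStatus.SHIPPED;

        System.out.println(statusCode + " -> " + status);
    }
}
```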
You have some modular code with a REST API. You are on your way to Microservices. Next, you package it in a container image that others can run. Simple. Now what? Your service needs to log information, needs to scale and load balance between its clones. Your service needs environment and metadata way outside its context. What about where the service will run? Who starts it? What monitors its health? What about antifragility? Updates? Networking? Oh my.
Don't get flustered. We will explore how Kubernetes simplifies the complexity of distributed computing.
This session will help you understand the terms, architecture and the mechanics of the Kubernetes tools. You will understand how to target your applications to a seemingly complex distributed compute platform.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts (1 of 2)
Aha moments with apps in containers can be quite liberating. The mobile space is saturated with “there's an app for that”. For us, we now expect “there's a container for that”. “Write once, run anywhere” (WORA) has changed to “Package once, run anywhere” (PORA). The growing community of containers is riding up the hype curve. We will look at many ways to assemble pods using architecture patterns you already know.
Your software package delivery and installation is no longer an rpm, deb, dmg, jar, war, native executable, or a run script; it is simply an image that is started with a common container run command.
During the presentation, we will explore some examples on Katacoda.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts
At the 2009 Agile conference, J.B. Rainsberger declared "Integration tests are a scam." I agree. Come see some compelling reasons why consumer-driven contract testing is a much better approach, particularly for microservices.
We will explore different testing techniques on Kubernetes, including an important one called “Consumer-Driven Contracts”.
After a brief overview of the concepts, a live demonstration will show you how to:
Kubernetes out of the box is a strong platform for running and coordinating large collections of services, containers, and applications. As is, Kubernetes is powerful for many solutions.
Remember Underdog? He was a mild-mannered dog, but when stress and conflict were introduced to the plot he took a magic pill and became a superhero. Istio is a superhero for Kubernetes.
Istio is an open, platform-independent service mesh that manages communications between services in a transparent way. With a distributed architecture many things can fail, slow down and become less secure. Istio provides solutions to those stresses in our plot toward this architecture style:
• Traffic management
• Observability
• Policy enforcement
• Service identity and security
We will explore these benefits with Istio through some working examples on Kubernetes. The key point is this meshing solution allows your Java code to be less coupled to the inherent weaknesses of a distributed platform.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts.
From operating systems on bare metal, to virtual machines on hypervisors, to container orchestration platforms: how we run our code and bundle our applications continues to evolve. Serverless computing continues the evolutionary path of our architectures.
Kubernetes provides an ideal, vendor-agnostic platform for serverless computing. Serverless promises developers that they can worry less about the cluster and focus more on their logic. Building on your Kubernetes knowledge, we will survey the various serverless frameworks on Kubernetes. Specifically, we will unpack how two open source serverless frameworks, Kubeless and OpenFaaS, leverage Kubernetes to achieve their promises. We will explore how Knative is helping serverless providers evolve to the next level of sophistication.
Graal is a VM, and an awesome VM at that, able to run a variety of languages, and fast. The execution times can be impressive too. This VM can run anything: JavaScript, Python 3, Ruby, R, JVM-based languages like Java, Scala, and Kotlin, and LLVM-based languages such as C and C++.
We are living in truly exciting times, with so much interesting technology, including in the VM space. Graal is a virtual machine and shared memory system for multiple languages. GraalVM can run either standalone or embedded in OpenJDK or node.js. Graal can even embed inside databases such as MySQL or Oracle. In this presentation, we look at this exciting VM, how to start it, how to run polyglot applications, and how to integrate it all within the same VM.
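As a minimal sketch of the polyglot API (the embedded JavaScript expression is arbitrary):

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Value;

public class PolyglotDemo {
    public static void main(String[] args) {
        // Evaluate a guest-language snippet (here JavaScript) from Java on GraalVM
        try (Context context = Context.create()) {
            Value sum = context.eval("js", "[1, 2, 3, 4].reduce((a, b) => a + b)");
            System.out.println("JS says: " + sum.asInt()); // 10
        }
    }
}
```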
An overview of various popular streaming technologies on the JVM: Kafka Streams, Apache Storm, Spark Streaming, and Apache Beam. We will also discuss a "bill of rights" for what to expect of any streaming library or framework: security, failover, and exactly-once processing.
Streaming is now an essential part of our lives. We have cheaper drives, faster networks, and more memory. We can haul tons of data, but we need to process, manipulate, and enrich that data. To do so we need some sort of streaming solution. Let's look at the most common ones and expose the differences and similarities between frameworks so you, the attendee, can make a better decision.
Kafka is more than just a messaging queue with storage. With technology from Confluent open source, it has become a full-fledged data ETL and data streaming ecosystem.
When we utter the word Kafka, it no longer refers to just one component but can mean an entire data pipeline ecosystem that transforms and enriches data from source to sink. It offers different ways to handle that data as well. In this presentation, we define:
We then discuss KSQLDB, a SQL layer built upon Kafka Streams that provides a simple query language for performing streaming operations.
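To make the streaming side concrete, here is a minimal Kafka Streams topology sketch; the topic names and filter logic are invented for illustration:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class OrdersTopology {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "orders-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        builder.<String, String>stream("orders")              // source topic
               .filter((key, value) -> value.contains("PAID"))
               .mapValues(String::toUpperCase)                 // enrich/transform in flight
               .to("paid-orders");                             // sink topic

        new KafkaStreams(builder.build(), props).start();
    }
}
```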
Akka Streams is Akka's implementation of the reactive-streams.org specification, competing with other reactive streaming libraries like RxJava and Project Reactor. What is clever is the ability to componentize sections of a stream and compose them into solutions.
Akka is an umbrella term for technologies ranging from Actors, Agents, Persistence, and Streams to Akka HTTP and more. You could say it is a competitor to the umbrella term "Spring". While Spring has Project Reactor and Netflix has RxJava, both excellent products in their own right, Akka Streams has a different take on reactive streams. Partial composition, functional abstractions, and DSL graphs are all incorporated in this wonderful streaming API. We will also look at a fantastic package called Alpakka that integrates with many of the popular databases and message queues.
In this presentation, we will start with an introduction to the concepts, then move to the programming API. While I will be using Scala and Java, knowledge of Scala is not required, and I will guide you through some of the Scala aspects.
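Here is a tiny, hedged sketch of the Java DSL, assuming an Akka 2.5-era setup with an ActorMaterializer; the numbers and transformations are arbitrary:

```java
import akka.actor.ActorSystem;
import akka.stream.ActorMaterializer;
import akka.stream.Materializer;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;

public class StreamsDemo {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("streams-demo");
        Materializer mat = ActorMaterializer.create(system);

        // A Source, some reusable processing stages, and a Sink composed into a runnable graph
        Source.range(1, 100)
              .map(i -> i * i)
              .filter(i -> i % 2 == 0)
              .runWith(Sink.foreach(System.out::println), mat)
              .thenRun(system::terminate);
    }
}
```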
For those still grappling with generics, this will be an attempt to clear the air. What are wildcards? What does extends mean in a wildcard? What does super mean? What is covariance? What is contravariance? What is invariance? What is erasure? Why and when do I need this?
Generics, or parameterized types, are one of the more painful items in any statically typed language on the JVM. This presentation sets out to overcome some of these hurdles and make sense of these confusing terms. We will cover the following:
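As a small preview of the wildcard rules ("producer extends, consumer super"), consider this sketch; the methods and data are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class Pecs {
    // Producer extends: we only read Numbers out, so any subtype list is acceptable (covariant use)
    static double sum(List<? extends Number> numbers) {
        double total = 0;
        for (Number n : numbers) total += n.doubleValue();
        return total;
    }

    // Consumer super: we only put Integers in, so any supertype list is acceptable (contravariant use)
    static void fillWithInts(List<? super Integer> target) {
        for (int i = 1; i <= 3; i++) target.add(i);
    }

    public static void main(String[] args) {
        System.out.println(sum(List.of(1, 2.5, 3L)));  // reads from a list of mixed Number subtypes
        List<Number> numbers = new ArrayList<>();
        fillWithInts(numbers);                          // a List<Number> can consume Integers
        System.out.println(numbers);
    }
}
```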
Some developers simply hate type inference. And then there are others who love it. Neither one of them is entirely right. In Java we have been making extensive use of type inference for several years without realizing it. The introduction of "var" in Java 10 has stirred up some surprising debate. In this presentation we will step back and review type inference in Java. Then we will dive deep into type inference in Java 10 and 11. We will wrap up the presentation with good recommendations on when to use type inference and when to avoid it.
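A few hedged examples of the kind we will dissect (the variable names and values are arbitrary):

```java
import java.util.ArrayList;
import java.util.Map;

public class VarDemo {
    public static void main(String[] args) {
        var names = new ArrayList<String>();     // inferred as ArrayList<String>, still statically typed
        names.add("Duke");

        var entry = Map.entry("answer", 42);     // inferred as Map.Entry<String, Integer>

        for (var name : names) {                 // inference also works in enhanced for loops
            System.out.println(name + " / " + entry);
        }
        // var mystery = null;                   // does not compile: nothing to infer from
    }
}
```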
Kotlin is one of the few languages that compiles down to Java bytecode, runs on the server side and on mobile devices, and also transpiles to JavaScript. In this fast-paced, example-driven presentation we will explore the many beauties of Kotlin and understand the key capabilities of the language.
A number of developers and organizations are beginning to make use of functional programming in Java. As with anything new, we often learn it the hard way. By stepping back and looking at programming style as idioms, we can quickly gravitate towards a better coding style and also avoid some common traps that we often get drawn towards.
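As one example of the stylistic shift we will discuss, compare an imperative loop with its declarative, functional counterpart (the data is arbitrary):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

public class IdiomDemo {
    public static void main(String[] args) {
        List<String> names = List.of("Ada", "Grace", "Linus", "Barbara");

        // Imperative style: we spell out the iteration and the mutation
        List<String> longNamesImperative = new ArrayList<>();
        for (String name : names) {
            if (name.length() > 4) longNamesImperative.add(name.toUpperCase());
        }

        // Functional style: we declare what we want and let the stream do the iterating
        List<String> longNamesFunctional = names.stream()
            .filter(name -> name.length() > 4)
            .map(String::toUpperCase)
            .collect(Collectors.toList());

        System.out.println(longNamesImperative.equals(longNamesFunctional)); // true
    }
}
```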
Java Modules are the future. However, our enterprise applications have legacy code, lots of it. How in the world do we migrate from the old to the new? What are some of the challenges? In this presentation we will start with an introduction to modules and learn how to create them. Then we will dive into the differences between unnamed modules, automatic modules, and explicit modules. After that we will discuss some key limitations of modules, things that may surprise your developers if they're not aware of them. Finally, we will discuss how to migrate current applications to use modules.
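To hint at the unnamed/automatic/explicit distinction, here is a hedged sketch of a descriptor written mid-migration; the module names and the legacy jar are invented:

```java
// module-info.java for a module being carved out of a legacy application
module com.example.orders {
    requires com.example.inventory;  // an explicit module we migrated earlier
    requires legacy.billing;         // an automatic module: its name is derived from
                                     // legacy-billing-1.0.jar sitting on the module path
    exports com.example.orders;      // everything else stays encapsulated
}
```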
Every organization has at least a phalanx or two in the "Cloud," and it is, understandably, changing the way we architect our systems. But your application portfolio is full of "heritage" systems that hail from the time before everything was as-a-service. Not all of those applications will make it to the valley beyond, so how do you grapple with your legacy portfolio? This talk will explore the strategies, tools, and techniques you can apply as you evolve towards a cloud native future.
In this talk, you will learn:
These days, you can't swing a dry erase marker without hitting someone talking about microservices. Developers are studying Eric Evans's prescient book Domain-Driven Design. Teams are refactoring monolithic apps, looking for bounded contexts and defining a ubiquitous language. And while there have been countless articles, videos, and talks to help you convert to microservices, few have spent any appreciable time asking if a given application should be a microservice. In this talk, I will show you a set of factors you can apply to help you decide if something deserves to be a microservice or not. We'll also look at what we need to do to maintain a healthy micro(services)biome.
There are many good reasons to use a microservices architecture. But there are no free lunches. The positives of microservices come with added complexity. Teams should happily take on that complexity…provided the application in question benefits from the upside of microservices. This talk will cut through the hype to help you make the right choice for your unique situation.
If you've spent any amount of time in the software field, you've undoubtedly found yourself in a (potentially heated) discussion about the merits of one technology, language, or framework versus another. And while you may have enjoyed the technical debate, as software professionals, we owe it to our customers (as well as our future selves) to make good decisions when it comes to picking one technology over another.
In this talk, I will explore what criteria we should consider when comparing technologies, how we can avoid burning platforms as well as what to do when we’ve reached a dead end. We will also apply these techniques to a current technology or two.
Take a look at your codebase. Go ahead, this abstract will wait. Notice anything? Perhaps a few more lines of JavaScript than years past? JavaScript is no longer an outlier, a language for the interns, something we can just mash together. Today, JavaScript is a first-class citizen. As such, we need to treat it with all the care and feeding we extend to our server-side languages. This talk will introduce you to a set of tools that will help you write bulletproof JavaScript.
Step one: make sure we aren't making any basic mistakes like using == when we really mean ===. To remedy these types of bugs, we'll leverage JSHint to statically analyze our code. In addition to walking through the setup, we'll discuss how to ratchet up the rules as you improve your codebase. Just like Java or C#, we also need to test our JavaScript code. We'll introduce Jasmine, a BDD-style testing tool, as well as other tools that help in the testing process. Last but not least, we'll take a tour of Plato, a JavaScript source code visualizer. Taken together, these tools can go a long way toward improving your JavaScript code.
If you have ever studied a martial art, chances are you are familiar with katas: the practice of individual training exercises. Repeatedly. It may seem pointless to practice the same move again and again, but the only way to improve is repetition. We can apply the same concept to learning programming languages.
Working individually or in small groups, attendees will work through a set of problems using JavaScript to solve them. Whether you are new to JavaScript or an old hand, this session will give you an opportunity to hone your craft! Bring a laptop or be prepared to make a friend.
Vue is a new, progressive front-end framework. At first glance, Vue might look like Yet Another JavaScript Framework, but it's time to take a look.
It is designed from the ground up to be incrementally adoptable, and can easily scale between a library and a framework depending on different use cases. It consists of an approachable core library that focuses on the view layer only, and an ecosystem of supporting libraries that helps you tackle complexity in large Single-Page Applications.
In this session we'll start with a look at how VueJS stacks up against the competition. We will explore VueJS from incremental adoption to building a full SPA. We'll cover the core concepts and capabilities and take a look at the growing ecosystem around it.
You understand the basics: the core Vue library, templates, styling, and components. You understand the how and why of VueJS. What's next?
In this session we start with some core best practices for the Vue ecosystem. We dive deeper into some capabilities of the core Vue library, as well as branching out into some of the official add-ons; specifically:
Vuetify, the Vue CLI, vue-router, and nuxt.js.
An evolutionary architecture supports incremental, guided change along multiple dimensions.
For many years, software architecture was described as the “parts that are hard to change later”. But then microservices showed that if architects build evolvability into the architecture, change becomes easier. This talk, based on my upcoming book, investigates the family of software architectures that support evolutionary change, along with how to build evolvable systems. Understanding how to evolve architecture requires understanding how architectural dimensions interact; I describe how to achieve appropriate coupling between components and services. Incremental change is critical for the mechanics of evolution; I cover how to build engineering and DevOps practices to support continuous change. Uncontrolled evolution leads to undesirable side effects; I cover how fitness functions build protective, testable scaffolding around critical parts to guide the architecture as it evolves.
The software development ecosystem exists in a state of dynamic equilibrium, where any new tool, framework, or technique leads to disruption and the establishment of a new equilibrium. Predictability is impossible when the foundation architects plan against changes constantly in unexpected ways. Instead, prefer evolvability over predictability. This keynote illustrates how to achieve evolutionary architectures and how to retrofit existing systems to support better evolution.
Building Evolutionary Architectures requires identifying and creating architectural fitness functions. This hands-on workshop defines fitness functions and provides group exercises to help identify and discover them.
According to the Building Evolutionary Architectures book, an architectural fitness function provides an objective integrity assessment of some architectural characteristic(s). This hands-on workshop provides examples of fitness functions and group exercises to identify, define, and implement a variety of fitness functions: atomic, holistic, continuous, triggered, temporal, and others.
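As one concrete flavor, an atomic fitness function can be expressed as a plain, executable check. The sketch below uses ArchUnit, which is just one possible tool for the job and is not prescribed above; the package names are illustrative:

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

public class LayeringFitnessFunction {
    public static void main(String[] args) {
        JavaClasses classes = new ClassFileImporter().importPackages("com.example");

        // An objective, repeatable assessment of an architectural characteristic:
        // the domain layer must never reach out to the web layer
        noClasses().that().resideInAPackage("..domain..")
            .should().dependOnClassesThat().resideInAPackage("..web..")
            .check(classes);
    }
}
```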
Stories and lessons from architecture, design, process, and other sources, each illustrating important principles and pitfalls for modern architects.
Those who cannot remember the past are condemned to repeat it. –George Santayana
The past is never dead. It's not even past. –William Faulkner
Most developers pursue the Latest and Greatest with intense fervor, yet the history of engineering, including software projects, contains rich lessons that we risk repeating ad nauseam. This session recounts a variety of stories of projects that failed architecturally…and why. Ranging from the Vasa in 1628 to Knight Capital in 2012, each story tells of a mistaken interpretation of some fundamental architectural principle and the consequences, some good, some less so. I also look at the common threads in these stories, which resonate with problems many companies have but don't realize.
Machine Learning is a huge, deep field. Come get a head start on how you can learn about how machines learn.
This talk will be an overview of the Machine Learning field. We’ll cover the various tools and techniques that are available to you to solve complex, data-driven problems. We’ll walk through the algorithms and apply them to some real but accessible problems so you can see them at work.
Documents contain a lot of information. We'll introduce you to a variety of techniques to extract it.
Machine Learning techniques are useful for analyzing numeric data, but they can also be useful for classifying text, extracting content, and more. We will discuss a variety of open source tools for extracting content, identifying elements and structure, and analyzing text, all of which can be used in distributed, microservice-friendly ways.
This open source machine learning framework from Google has taken off. Come learn what you can do with it in your own organization.
TensorFlow is a powerful, data flow-oriented machine learning framework developed by Google's Brain Team. It was designed to be easy to use and widely applicable, both to numeric, neural network-oriented problems and to other domains. We'll give an overview and then apply it to several fun, realistic problems.
Deep Learning is an evolution of the capabilities of more conventional machine learning to take advantage of the extra data available from Big Data systems. With more data, many of the manual aspects of feature selection and other machine learning steps can be derived automatically. We will highlight many of the main deep learning frameworks and give you a hands-on introduction to what is possible and how you can start to use them.
We will cover:
What happens if web applications got really fast?
We are increasingly able to do more in the browser because of faster networks, optimized JavaScript engines, new standard APIs, and more. There is a new initiative to define a binary format called WebAssembly that will provide a compiled, cross-platform representation that will take us to the next level. Complex business applications and 3D video games alike will benefit from this new standard. Come hear about what it can do for you.
How do you create creativity? This talk offers techniques and perspectives to discover, grow, and project your ideas.
Where do ideas for new talks, books, videos, software…ultimately, everything…come from? A common question at conference panels to the speakers is “Where do you get your ideas for talks?” This session answers that question, along with how some of us cultivate new ideas. This talk investigates three aspects of ideation:
— How do new ideas arise?
I cover various ways of synthesizing new ideas: switching axiom(s), mutation, oblique strategies, and a host of other techniques to generate the germ of a new idea.
— How do ideas grow into intellectual property?
I cover techniques for iterating on ideas to discover deeper meanings and connections. I also cover techniques to evolve and grow ideas.
— How do you communicate new IP?
I cover various ways to convey IP: presentations, articles, books, videos, and a host of other media. I talk about writing and presenting techniques that amplify your new idea and get it out into the world.
One key to building new IP is separating ideation, organization, and realization, which often become jumbled. By separating them, we can build practices to allow each to flourish. This talk provides concrete advice in each area to help realize new ideas.
Creativity is the reason we have all the Cool Stuff we have. This session investigates how to fan the spark of an idea into a roaring flame of intellectual erudition.
Micronaut is a modern, JVM-based, full-stack framework for building modular, easily testable microservice applications.
In this session we'll dive deep into Micronaut, its strengths, capabilities, and best practices when building & testing services, functions, and reactive apps.
Event Storming is a low-tech and powerful technique for creating models of business processes that translate directly into the realm of Domain-Driven Design. All you need is a combination of domain experts and engineers, a large wall to act as a modeling surface, several colors of sticky notes, and markers.
Event Storming is much better caught rather than taught, so in this session, we’ll have the opportunity to work together to understand and model an exemplary problem domain in small groups. You’ll need to bring a laptop to this session and create a free Realtime Board account to use as a virtual modeling surface that you can take away from the session.
Ansible, like Git, aims to be a simple tool.
The benefit here is that the level of abstraction that Ansible offers is paper-thin, with no complicated workflows, or opinions enforced by the tool itself.
The downside is that without a prescribed approach to Ansible, developing your playbooks often becomes a case of trial-and-error.
As engineers steeped in the DevOps mindset, we must be able to use the tool effectively, allowing us to accelerate and shorten the lead time from development to production.
In this session we will take a look at some lessons learned when working with Ansible. Topics covered:
We developers really like code.
Code, being plain text, can be version-controlled, versioned, and carried through a traditional SDLC lifecycle.
For the longest time, however, we were forced to live with most of our CI/CD and server configurations sitting outside of our codebases, often at the mercy of infrastructure/operations teams.
With the evolution of DevOps come notions like Infrastructure as Code (IaC), and with Jenkins 2.0, we can now manage our Jenkins job configurations as code!
In this session we will explore the concept of "Pipeline as Code," including the DSL that Jenkins offers, and how we can use this to configure Jenkins jobs via simple, version-controlled Jenkinsfiles. We will see how we can create Jenkins jobs by autodiscovering repositories, as well as when we branch our code to create releases.
How did Amazon, known to the world as an online retailer, manage to make cloud services a multi-billion dollar subsidiary? What is it that AWS offers that makes it so enticing for both solo development, as well as large enterprises like Netflix?
In this session we will take our first baby steps towards AWS. We will take a quick glance at what AWS is and what it has to offer, and then see what it takes to lay out a minimal setup for a simple application, including setting up compute backed by a database, DNS and routing, as well as leveraging additional services like S3 for storage.
As developers we not only operate in different contexts, but also often have these different contexts interplay as part of our work.
Each of the tools that we use — version control systems like Git (along with collaborative tools like GitHub/GitLab), IDEs like Eclipse/IntelliJ, build systems like Gradle, CI/CD tooling like Jenkins, IaC tools like Ansible, the command line — all introduce context.
To be effective developers we need to know when to operate within a certain context, and when to combine or tease apart the ways these contexts interplay.
Can you improve your release announcements if you format your commit messages consistently? You bet!
How should your build tool interact with your version control system?
What does naming your files have to do with how you use your IDE?
This session will take a look at several of these contexts — it will attempt to discern between them, and explore when you should separate them and when you should bring them together.
With lots of examples and lots of quizzes, this session will definitely leave you thinking about a few things.
An integral part to any DevOps effort involves automation. No longer do we wish to manage tens, hundreds or even thousands of servers by hand, even if that were possible. What we need is a programmatic way to create and configure servers, be those for local development, all the way to production.
This is where tools like Ansible come into play. Ansible offers us a way to define what our server configurations should look like using plain-text, version-controlled configuration files.
Not only does this help us avoid "snowflakes," but it allows server configuration to participate in the SDLC, pulling server configuration closer to the developers.
In this session we will explore what Ansible has to offer, decipher the Ansible terminology, and run some examples to configure a local server.
We live and operate in a world where our services not only need to be available 24/7, but must also respond to surges in demand and scale down when demand drops.
Reactive systems and reactive architectures have arisen to address this very concern.
Vert.x is a polyglot toolkit that makes writing reactive applications on the JVM possible.
With a non-blocking, event-driven architecture, Vert.x can help you scale your application, leveraging the multi-threaded nature of the JVM and the multi-core abilities of your hardware.
In this session we will take a look at Vert.x, its programming model, and its architecture.
We will write some code, and explore the ecosystem around Vert.x, as well as discuss some of the gotchas that you might come across as you develop Vert.x applications.
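To set the stage, here is a minimal Vert.x HTTP server; the port and response text are arbitrary:

```java
import io.vertx.core.Vertx;

public class HelloVertx {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Nothing blocks: the handler is invoked on the event loop for each request
        vertx.createHttpServer()
             .requestHandler(req -> req.response().end("Hello from the event loop"))
             .listen(8080);
    }
}
```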
You have used Regular Expressions before, and they don't faze you one bit. But you know lurking beneath is something even more powerful, and you want to know how to leverage that power. Well fear not! This is just the session you want. We will take a deep dive into Regular Expressions and walk away having understood how to truly take advantage of some of Regular Expressions more esoteric features.
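As a taste of those more esoteric features, the sketch below uses named capturing groups and a lookahead; the pattern and sample input are invented:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexDemo {
    public static void main(String[] args) {
        // Named groups make the match self-documenting; the lookahead insists on a dot
        // somewhere in the domain without consuming any characters
        Pattern email = Pattern.compile("(?<user>[\\w.+-]+)@(?=[^@]*\\.)(?<domain>[\\w.-]+)");
        Matcher m = email.matcher("jane.doe@example.com");
        if (m.matches()) {
            System.out.println(m.group("user"));    // jane.doe
            System.out.println(m.group("domain"));  // example.com
        }
    }
}
```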
In this session we will get acquainted with Docker. We will discuss what docker is, how to install it, and how to start using Docker. We will also explore some of the benefits of containerizing your applications.
Containers are taking over the world. Containers provide a means to have hermetic builds of your software, allowing for truly immutable testing and delivery of your software. Docker is one of many containerization technologies, and in this session we will take a brief look at Docker and what it has to offer.
Jez Humble and David Farley unleashed a revolution with their seminal book "Continuous Delivery," leading to the evolution of a movement aptly named DevOps.
DevOps is about bringing development and operations closer together, letting developers manage their releases and operationalize their applications in production, leaving operations to do what they do best.
In this session we will explore what DevOps really entails: what it means to embrace DevOps, what changes we should expect, what kind of tooling we should be thinking about, and finally, what the benefits of embracing DevOps are.
If you are interested in a different approach to writing your next microservice, or are knee-deep in the DevOps world with Kubernetes and Docker (both written using Go), you need to know Go.
Come join me for a rather quick introduction to the language and its merits and shortcomings.
Microservices, DevOps, command-line utilities — Go has been the catalyst in a quiet revolution happening right under our noses. Go, from Google, aims to be a simple language for writing scalable and reliable software. Go brings a unique tilt to many aspects of language design, including enforcing a strict project structure, powerful tooling to support things like code-style enforcement, and "goroutines" to allow for concurrency.
While the Web itself has strong decentralized aspects to how it is used, the backend technologies are largely centralized. The naming systems, the routing systems and the traffic that all points back to the same place for a website are all centralized technologies. This creates both a liability as well as a control point.
In order to break free of some of these limitations, new technologies are emerging to provide a more decentralized approach to the Web.
This talk will walk you through some emerging technologies that provide decentralized content storage and distribution, edge computing, and more. We will touch upon the InterPlanetary File System, WebTorrent, Blockchain spin-offs, and more.