Monoliths are out and microservices are in. Not so fast. Many of the benefits attributed uniquely to microservices are actually a byproduct of other architectural paradigms with modularity at their core. In this session, we’ll look at several of the benefits we expect from today’s architectures and explore these benefits in the context of various modern architectural paradigms. We’ll also examine different technologies that are applying these principles to build the platforms and frameworks we will use going forward.
Along the way, we’ll explore how to refactor a monolithic application using specific modularity patterns and illustrate how an underlying set of principles span several architectural paradigms. The result is an unparalleled degree of architectural agility to move between different architectural paradigms.
Big architecture up front is not sustainable in today's technology climate, where expectations are high for delivering high-quality software more quickly than ever before. To accommodate change, teams are moving to agile methods, but agile methods provide little architectural guidance. Attempts to define the architectural vision for a system early in the development lifecycle do not work. In this session, we provide practical guidance for software architecture on agile projects.
We will explore several principles that help us create more flexible and adaptable software systems. We’ll expose the true essence of what’s meant when we say “architectural agility.” And we’ll explore the real goal of software architecture and how we can accommodate architectural change to help increase architectural agility.
Microservice architecture is a modern architectural approach that increases development and delivery agility by focusing on building modular services. The framework we use has a tremendous impact on how quickly and easily we can deliver services. New frameworks are emerging that embrace new approaches for helping us deliver microservices.
In this session, we will explore some modern Java frameworks for building microservices (aka micro frameworks). Example frameworks you may see include Dropwizard, Ratpack, Spark, Ninja, RestExpress, Play, Restlet, and RestX. We'll demonstrate each framework by using a programming kata to build the same service with several different frameworks. Optionally, bring your own laptop, clone the GitHub repo, and you can build the services along with me. To do this, you must have Java 8+ and Gradle.
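To give a flavor of how small these services can be, here is a minimal sketch of a "hello" endpoint using Spark (the Java micro framework, not Apache Spark); the route, port, and payload are illustrative rather than the actual kata.

```java
import static spark.Spark.get;
import static spark.Spark.port;

public class HelloService {
    public static void main(String[] args) {
        port(8080);  // illustrative port
        get("/hello", (request, response) -> {
            response.type("application/json");
            return "{\"greeting\": \"Hello, NFJS!\"}";
        });
    }
}
```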
The way we build and deliver software is changing. We must deliver software more quickly than ever before, and traditional approaches to software architecture, infrastructure, and methodology do not allow us to meet demand. We've reached the limits of agility through process improvement alone, and further increases demand that we focus on improving architecture, infrastructure, and methodology simultaneously. 12 Factor is an app development methodology for building apps suited to the modern era.
Building modern apps requires modern methods, and 12 Factor is an app development methodology that helps teams build software by melding modern architectural paradigms with agile practices like continuous delivery for deployment to cloud platforms. In this session, we'll examine the 12 factors and explore how to apply them to apps built using Java.
Java 9 with the Jigsaw module system is here. In this session, we'll explore the basics of the Jigsaw module system and examine the impact it will have on how we build Java applications. We will dig into its major features, including dependency management and Jigsaw services. Once we understand Jigsaw's basics, we will explore what it's going to take to migrate existing Java applications to Java 9 and leverage Jigsaw.
Jigsaw's impact stands to be consequential. Jigsaw will restrict application code from accessing non-published JDK classes (e.g., internal classes under sun.*), require you to be explicit in declaring your dependencies, and more. We will explore Jigsaw basics and then dig into the impact Jigsaw will have on migrating existing Java applications to Java 9.
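To make that concrete, here is a minimal module descriptor of the kind Jigsaw expects; the module, package, and service names are invented for illustration.

```java
// module-info.java, at the root of the module's source tree
module com.example.orders {
    requires java.sql;                      // an explicit dependency on a platform module
    exports com.example.orders.api;         // only this package is visible to other modules
    provides com.example.orders.api.OrderService
        with com.example.orders.internal.DefaultOrderService;  // a Jigsaw service
}
```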
An evolutionary architecture supports incremental, guided change along multiple dimensions.
For many years, software architecture was described as the “parts that are hard to change later”. But then microservices showed that if architects build evolvability into the architecture, change becomes easier. This talk, based on my upcoming book, investigates the family of software architectures that support evolutionary change, along with how to build evolvable systems. Understanding how to evolve architecture requires understanding how architectural dimensions interact; I describe how to achieve appropriate coupling between components and services. Incremental change is critical for the mechanics of evolution; I cover how to build engineering and DevOps practices to support continuous change. Uncontrolled evolution leads to undesirable side effects; I cover how fitness functions build protective, testable scaffolding around critical parts to guide the architecture as it evolves.
The software development ecosystem exists in a state of dynamic equilibrium, where any new tool, framework, or technique leads to disruption and the establishment of a new equilibrium. Predictability is impossible when the foundation that architects plan against changes constantly in unexpected ways. Instead, prefer evolvability over predictability. This keynote illustrates how to achieve evolutionary architectures and how to retrofit existing systems to support better evolution.
Building Evolutionary Architectures requires identifying and creating architectural fitness functions. This hands-on workshop defines fitness functions and provides group exercises to help identify and discover them.
According to the Building Evolutionary Architectures book, an architectural fitness function provides an objective integrity assessment of some architectural characteristic(s). This hands-on workshop provides examples of fitness functions and group exercises to identify, define, and implement a variety of fitness functions: atomic, holistic, continuous, triggered, temporal, and others.
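As one illustrative example (not taken from the book), an atomic fitness function can be written as an executable check; the sketch below uses the ArchUnit library and assumes hypothetical ..domain.. and ..web.. packages.

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;
import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

public class LayeringFitnessFunction {
    public static void main(String[] args) {
        JavaClasses classes = new ClassFileImporter().importPackages("com.example");
        // Fail the build if domain code reaches "up" into the web layer
        ArchRule rule = noClasses().that().resideInAPackage("..domain..")
                .should().dependOnClassesThat().resideInAPackage("..web..");
        rule.check(classes);
    }
}
```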
This session covers two critical soft skills for architects:
New architects find soft skills like creating lucid documentation and building compelling presentations challenging. This presentation covers a variety of ways to document ideas in software architecture, ranging from diagramming techniques (that aren't UML) to Architecture Decision Records and ultimately to presentations. The second part of the talk leverages patterns and anti-patterns from the Presentation Patterns book to help architects build clear and concise representations of their ideas.
How do you create creativity? This talk offers techniques and perspectives to discover, grow, and project your ideas.
Where do ideas for new talks, books, videos, software…ultimately, everything…come from? A common question at conference panels to the speakers is “Where do you get your ideas for talks?” This session answers that question, along with how some of us cultivate new ideas. This talk investigates three aspects of ideation:
— How do new ideas arise?
I cover various ways of synthesizing new ideas: switching axiom(s), mutation, oblique strategies, and a host of other techniques to generate the germ of a new idea.
— How do ideas grow into intellectual property?
I cover techniques for iterating on ideas to discover deeper meanings and connections. I also cover techniques to evolve and grow ideas.
— How do you communicate new IP?
I cover various ways to convey IP: presentations, articles, books, videos, and a host of other media. I talk about how writing and presenting techniques can amplify your new idea and get it out into the world.
One key to building new IP is separating ideation, organization, and realization, which often become jumbled. By separating them, we can build practices to allow each to flourish. This talk provides concrete advice in each area to help realize new ideas.
Creativity is the reason we have all the Cool Stuff we have. This session investigates how to fan the spark of an idea into a roaring flame of intellectual erudition.
Stories and lessons from architecture, design, process, and other sources, each illustrating important principles and pitfalls for modern architects.
Those who cannot remember the past are condemned to repeat it. –George Santayana
The past is never dead. It's not even past. –William Faulkner
Most developers pursue the Latest and Greatest with intense fervor, yet the history of engineering, including software projects, contains rich lessons that we risk repeating ad nauseam. This session recounts a variety of stories of projects that failed architecturally…and why. Ranging from the Vasa in 1628 to Knight Capital in 2012, each story tells of a mistaken interpretation of some fundamental architectural principle and the consequences–some good, some less so. I also look at the common threads in these stories, which resonate with problems many companies have but don't realize.
JavaScript is no longer that scary nasty language. There are so many wonderful features in the modern versions of this language that I can't stop praising it. In this presentation we will focus on some of the most exciting features of the language that make programming in JavaScript a pleasant experience.
We will dive into beautiful parts and features from ES 6, ES 7, and ES 8.
In this presentation, we will dig deeper into creating applications with React.
We will take a deeper look into creating components, dealing with life cycle events, managing state, when and how to use Redux, and how to approach automated testing of the front-end code you create.
JavaScript is a very powerful and ubiquitous language. At the same time, it is also a language that is often misunderstood and misused. Better quality code greatly reduces the effort and time needed to maintain software, so there is a good incentive to improve the quality of our code.
In this presentation we will cover two things. We will discuss dos and don'ts for programming with JavaScript. Then we will look at a set of tools that can help keep an eye on our code and prompt us to improve the quality.
React is a very targeted library for creating rich front ends. While React does what it sets out to do fairly well, developing a practical application requires more than React alone.
In this presentation we will build the front end for a small application that pulls data from a backend and along the way learn how to use React and related libraries to achieve the desired results.
On the 2017 tour, I introduced the notion of “serverless” and Functions as a Service (FaaS) platforms. We understood the motivation for serverless computing, compared serverless to other cloud-native infrastructure approaches, navigated some architectural tradeoffs, and took a whirlwind tour of the Big 3 FaaS providers.
In this 2018 edition of the talk, we’ll still cover a few of the same themes to bring new folks up to speed, but we’ll also look at what’s changed in this ecosystem over the past year, take a look at new or enhanced features, offerings, runtimes, and programming models, and examine what use cases are becoming popular for serverless computing. We’ll also look at how tradeoffs have evolved, and definitely throw in a few demos.
In this presentation, we'll build, test, and deploy an image-processing pipeline using Amazon Web Services such as Lambda, API Gateway, Step Functions, DynamoDB, and Rekognition.
We'll take a look at a number of related topics along the way.
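To make the Lambda piece concrete, a Java handler has roughly the shape sketched below; the class name and behavior are placeholders, and the real pipeline would call Rekognition and DynamoDB rather than just logging.

```java
import java.util.Map;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// A minimal Lambda handler; API Gateway or Step Functions would pass the event map in
public class LabelImageHandler implements RequestHandler<Map<String, Object>, String> {

    @Override
    public String handleRequest(Map<String, Object> event, Context context) {
        context.getLogger().log("Received event: " + event);
        // In the real pipeline: fetch the image from S3, send it to Rekognition,
        // and persist the detected labels to DynamoDB.
        return "processed";
    }
}
```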
Chaos Engineering, pioneered by Netflix, is the discipline of experimenting on a distributed system in order to build confidence in the system's capability to withstand turbulent conditions in production.
In this presentation, we'll take a look at the problem of building resilient software, and discuss how applying Google's SRE principles and patterns for architectural resiliency can help us to solve it. We'll then examine how the practice of Chaos Engineering can help us to prove or disprove the resiliency of our systems.
All software architectures have to deal with stress. It’s simply the way the world works! Stressors come from multiple directions, including changes in the marketplace, business models, and customer demand, as well as infrastructure failures, improper or unexpected inputs, and bugs. As software architects, one of our jobs is to create solutions that meet both business and quality requirements while appropriately handling stress.
We typically approach stressors by trying to create solutions that are robust. Robust systems can continue functioning properly in the presence of internal and external challenges, but they also have one or more breaking points. When we pass a robust system's known threshold for a particular type of stress, it will fail. And when a system encounters an “unknown unknown” challenge, it will usually not be robust!
Recent years have seen new approaches, including resilient, antifragile, and evolutionary architectures. All of these approaches emphasize the notion of adapting to changing conditions in order to not only survive stress but sometimes to benefit from it. In this class, we’ll examine together the theory and practice behind these architectural approaches.
As an architectural style, microservices are here to stay. They have crossed the proverbial chasm, and now it’s time to get to work. Microservices provide us with the ability to create truly evolutionary architectures composed of cohesive and autonomous components using well known and characterized distributed systems patterns.
As we create and compose components across the hard boundary of the network, we become deeply interested in establishing the correct boundaries, which has resulted in renewed interest in system design and decomposition. Fortunately, the tried and true practices of Domain-Driven Design are available to us.
In this presentation, we will cover a distillation of strategic (bounded contexts, subdomains, context mapping) and tactical (aggregates, domain events) DDD techniques and demonstrate how they enable us to create effective event-driven microservices.
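As a rough illustration of the tactical side, an aggregate can record the domain events it raises so they can later be published to other bounded contexts; the order example below is hypothetical and framework-free.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.UUID;

// A hypothetical domain event
class OrderPlaced {
    final UUID orderId;
    OrderPlaced(UUID orderId) { this.orderId = orderId; }
}

// A hypothetical aggregate root that records the events it raises
class Order {
    private final UUID id = UUID.randomUUID();
    private final List<Object> pendingEvents = new ArrayList<>();

    void place() {
        // ...invariant checks for the Order aggregate would go here...
        pendingEvents.add(new OrderPlaced(id));
    }

    // Published to other services (e.g., via a message broker) after the aggregate is saved
    List<Object> pendingEvents() {
        return new ArrayList<>(pendingEvents);
    }
}
```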
In this example-driven presentation, you'll learn how to leverage Spring Boot to accelerate application development, enabling you to focus coding on logic that drives application requirements with little concern for code that satisfies Spring's needs.
For over a decade, Spring has sought to make enterprise Java development easier. It began by offering a lighter alternative to EJBs, but went on to address things such as security, working with various sorts of databases, cloud-native applications, and reactive programming. And, along the way, Spring even took steps to make itself easier to use, offering Java-based and automatic component configuration. Even so, there's still a lot of near-boilerplate code required to develop Spring applications.
Enter Spring Boot. Spring Boot's primary purpose is to make Spring easier to work with, and it achieves this in a few key ways that we'll explore in this session.
All together, Spring Boot lets you focus on fulfilling your application's requirements without worrying about writing code that satisfies the needs of a framework.
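For a sense of how little code that can mean, here is a minimal sketch of a complete Spring Boot web application; the greeting endpoint is purely illustrative.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication  // enables auto-configuration and component scanning
@RestController
public class GreetingApplication {

    @GetMapping("/greeting")
    public String greeting() {
        return "Hello from Spring Boot";
    }

    public static void main(String[] args) {
        SpringApplication.run(GreetingApplication.class, args);
    }
}
```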
In this session, you'll learn how to take your Spring Boot skills to the next level, applying the latest features of Spring Boot. Topics may include Spring Boot DevTools, configuration properties and profiles, customizing the Actuator, and crafting your own starters and auto-configuration.
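As a small taste of the configuration-properties topic, a type-safe properties class might look like the sketch below; the greeting prefix and fields are hypothetical.

```java
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

// Binds greeting.message and greeting.repeat from application.properties (or a profile-specific file)
@Component
@ConfigurationProperties(prefix = "greeting")
public class GreetingProperties {

    private String message = "Hello";  // default used when no property is set
    private int repeat = 1;

    public String getMessage() { return message; }
    public void setMessage(String message) { this.message = message; }

    public int getRepeat() { return repeat; }
    public void setRepeat(int repeat) { this.repeat = repeat; }
}
```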
TBD
In this session, we'll explore the new reactive features in Spring 5 to build reactive, non-blocking applications using Spring's familiar programming model.
Traditionally, applications have been built using a blocking, synchronous model. Although comfortable and intuitive for most programmers, this model doesn't scale well. And although there are several new approaches to reactive programming, they don't necessarily fit into the familiar programming model that Spring developers are accustomed to working with.
Spring 5 has introduced a set of new reactive features, enabling non-blocking, asynchronous code that scales well using minimal threads. Moreover, it builds on the same concepts and programming models that Spring developers have used for years.
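A minimal sketch of what that can look like, assuming a hypothetical greetings endpoint built on Spring WebFlux and Project Reactor:

```java
import java.time.Duration;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class GreetingController {

    // Emits one greeting per second without tying up a thread per request
    @GetMapping("/greetings")
    public Flux<String> greetings() {
        return Flux.interval(Duration.ofSeconds(1))
                   .map(i -> "Hello #" + i);
    }
}
```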
Serverless is a big topic these days. More specifically, Function as a Service (FaaS) allows small-scale functions deployed in the cloud to run only when needed. Compared to Platform as a Service (PaaS) offerings, which are “always on” and require explicit scaling, FaaS services wake up when needed and are scaled by the platform automatically.
Imagine a service that only gets invoked once per hour. In PaaS, that service is always running and ready to handle requests, consuming processor resources and ultimately costing more to be ready at a moment's notice. In contrast, FaaS services awaken quickly upon invocation and go away when finished, conserving resources and cost.
In this example-driven session, we'll see how to create and deploy Function as a Service (FaaS) services with Spring and Spring Boot.
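The programming model centers on plain java.util.function types; the sketch below uses Spring Cloud Function (one way Spring supports FaaS) and is illustrative rather than a complete deployment.

```java
import java.util.function.Function;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class UppercaseFunctionApplication {

    // Spring Cloud Function exposes this bean as a function a FaaS platform can invoke
    @Bean
    public Function<String, String> uppercase() {
        return value -> value.toUpperCase();
    }

    public static void main(String[] args) {
        SpringApplication.run(UppercaseFunctionApplication.class, args);
    }
}
```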
As a software developer, you've likely come across at least one veteran in our field who has shared tales of the old days when they used punch cards to give instructions to a computer. Thankfully, those days are long gone. Over time, the way we interact with machines has evolved through various stages such as textual, graphical, and touch/gestural user interfaces…up to today where Siri, Alexa, and Google Home are ushering in a new era of voice user interfaces.
In this session, we'll focus on Alexa, the voice assistant present in Amazon's line of Echo devices. We'll explore the Alexa Skills Kit (ASK) and see how to construct voice UIs (known as “skills”) to do our bidding. We'll see how to back those skills using Spring, including Spring MVC and Spring Cloud Function. And maybe, one day, we'll tell future generations about how we once had to actually touch computers.
By the end of this conference you will have learned many new tools and technologies. The easy part is done; now for the hard part: getting the rest of the team, and management, on board with the new ideas. Easier said than done.
Whether you want to effect culture change in your organization, lead the transition toward a new technology, or are simply asking for better tools, you must first understand that having a “good idea” is just the beginning. How can you dramatically increase your odds of success?
You will learn 12 concrete strategies to build consensus within your team as well as 6 techniques to dramatically increase the odds that the other person will say “Yes” to your requests.
As a professional mentalist, Michael has been a student of psychology, human behavior, and the principles of influence for nearly two decades. There are universal principles of influence that are necessary to both understand and leverage if you want to be a more effective leader of change in your organization.
In this session we discuss strategies for getting your team on board as well as when/how to approach management within the department and also higher-up in the organization.
In Part 1, you learned the core principles of influence and persuasion. How do we take this back to the office and apply what we've learned?
We dive deep into specific strategies to get both the team and the business on board with your ideas and solutions. We cover several real-world patterns you can follow to be more effective and more persuasive. Part 1 was conceptual; Part 2 is practical.
Good discussions are supposed to diverge from their intended path. Free association is a feature, not a bug, and helps you see new connections between ideas. Without structure, however, it can be difficult to add context to new ideas and understand how they relate to more immediate problems. This talk discusses the technique of mental bookmarks – how to remember where you were when a discussion diverged. In addition to giving you a reputation for having an amazing memory, the skill also helps with personal awareness in general.
To give the technique context, we'll look at the fractal nature of success – the way we tend to see our current environment in relative terms, always comparing ourselves to those slightly more successful and slightly less successful.
Everything in IT changes constantly: business, technology, practices, and so on. This keynote investigates techniques that allow architects and developers to build systems that support rather than avoid change.
The only constant in IT is change: Business practices change, tools and frameworks evolve, and wholly new tools and techniques appear on a regular basis. How can developers develop and architects architect in an environment like this? This keynote highlights techniques to support constant change, including evolutionary architecture, immutable infrastructure, coding techniques, and better ways to gather requirements. I also cover flexible governance models, evolutionary data, and adaptability. This keynote covers the breadth of modern software development, packed with advice on how to build systems that embrace rather than avoid change.
Reactive programming is no longer an esoteric idea. Many libraries and tools have started providing excellent support for this style of programming, so much so that Java 9 has embraced the idea by including it in the specification. In this presentation we will take a dive into what reactive programming looks like from the Java specification point of view.
We will cover:
- Reactive programming: what problem does it solve?
- How to use it from the Java point of view
- How to make good use of it
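As a concrete anchor, Java 9's reactive support lives in the java.util.concurrent.Flow API; the toy publisher/subscriber below is only a sketch.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    public static void main(String[] args) {
        // SubmissionPublisher is Java 9's built-in Flow.Publisher implementation
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();
        CompletableFuture<Void> done =
                publisher.consume(item -> System.out.println("Received: " + item));

        List.of("one", "two", "three").forEach(publisher::submit);
        publisher.close();  // signals onComplete to the subscriber
        done.join();        // wait for all items to be delivered
    }
}
```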
After years of stagnation, Java is one of the most vibrant languages of current times. Java 8 saw several improvements and Java 9 a few more. Well, that's only the beginning. Come to this talk to learn about where Java is heading, what's brewing in the language and the ecosystem, well beyond Java 9.
We will look ahead into the next versions of Java, what you can expect, and how you can get ready.
Creating code is easy; creating good code takes a lot of time, effort, discipline, and commitment. The code we create is truly the manifestation of our designs. Creating a lightweight design can help make the code more extensible and reusable.
In this presentation we will take an example oriented approach to look at some core design principles that can help us create better design and more maintainable code.
Everyone (including your humble speaker) talks about modularization. For once, let's focus on other interesting and exciting parts of Java 9 that do not involve modularization. In this presentation we will talk about key improvements and additions to the Java language and the JDK introduced in Java 9.
We will first focus on things that can be done better in Java 9 than in previous versions of Java.
Then we will look at things we simply could not do before but we can benefit from now.
And finally, we will not talk about modularization… well, I lied; we will discuss it just a bit.
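A few of the smaller conveniences we'll touch on, shown here as a hedged sketch (the exact topic list may vary):

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Stream;

public class Java9Conveniences {
    public static void main(String[] args) {
        // Collection factory methods: concise, immutable collections
        List<String> names = List.of("Ada", "Grace", "Alan");
        Map<String, Integer> ages = Map.of("Ada", 36, "Grace", 85);

        // Stream additions such as takeWhile
        Stream.of(1, 2, 3, 10, 4).takeWhile(n -> n < 5).forEach(System.out::println);

        // Optional.ifPresentOrElse handles both branches in one call
        Optional.of(names.get(0))
                .ifPresentOrElse(System.out::println, () -> System.out.println("empty"));
        System.out.println(ages.size());
    }
}
```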
Apache Spark is a fast engine for processing data from large document stores and databases. Spark is highly distributed, optimized, and redundant, designed for manipulating and aggregating data across large clusters.
This talk is an introduction to Apache Spark, its architecture, and its programming API. We start with an introduction to DataFrames, the Catalyst optimizer, and Spark SQL. We will then venture on to Datasets, discussing the Dataset API and its functional programming aspects. We will touch lightly on RDDs and the pros and cons of using that API. We will finish with how to connect to data sources like HDFS, S3, Cassandra, Elasticsearch, and Kafka. This presentation will have samples that you can try out at home or at the office.
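By way of preview, reading a JSON file into a DataFrame and querying it with the Java API looks roughly like this (the file path and column names are placeholders):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class DataFrameIntro {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("DataFrameIntro")
                .master("local[*]")  // run locally for the demo
                .getOrCreate();

        // DataFrames are Datasets of Row; Catalyst optimizes the query plan
        Dataset<Row> people = spark.read().json("people.json");
        people.filter(col("age").gt(21))
              .select(col("name"))
              .show();

        spark.stop();
    }
}
```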
Spark Streaming is one of the few additions available with Spark that builds on its internal architecture to create a stream-processing framework for handling data in real time.
In this presentation, we will start with a short reintroduction to Spark, its architecture, and what it does. Then we delve into streaming: what purpose it serves, how to set up Spark Streaming, and how to use it. We will discuss how to configure it with time-based batches, how the internals work, and how to integrate it with Kafka. We will then talk about some of the higher-end features like checkpointing and windowing.
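For orientation, the classic socket word-count setup with the Java API looks something like this (host, port, and batch interval are illustrative):

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class StreamingIntro {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setMaster("local[2]").setAppName("StreamingIntro");
        // Micro-batches are created every 5 seconds
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

        JavaDStream<String> lines = jssc.socketTextStream("localhost", 9999);
        JavaDStream<String> words = lines.flatMap(line -> Arrays.asList(line.split(" ")).iterator());
        words.countByValue().print();

        jssc.start();
        jssc.awaitTermination();
    }
}
```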
Serialization is important for anything Big Data. We need to send information over the wire, and we need to do so efficiently. This core-concepts presentation covers various serialization techniques and libraries so that you can use Akka, Kafka, Spark, and various MQs efficiently.
For this presentation, we will focus on three topics: core Java serialization, Apache Avro, and Google's Protocol Buffers. We will review each for ease of serialization and deserialization, review performance, and see how to integrate them with Akka, Kafka, and Spark.
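To set a baseline for that comparison, here is what a round trip through core Java serialization looks like (the Event class is a made-up example):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class JavaSerializationDemo {
    // A made-up payload class; Serializable is all core Java requires
    static class Event implements Serializable {
        private static final long serialVersionUID = 1L;
        final String type;
        Event(String type) { this.type = type; }
    }

    public static void main(String[] args) throws Exception {
        // Serialize to bytes, as you would before putting a record on the wire
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new Event("page-view"));
        }

        // Deserialize on the consuming side
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()))) {
            Event event = (Event) in.readObject();
            System.out.println(event.type + " (" + bytes.size() + " bytes)");
        }
    }
}
```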
Kafka has captured mindshare in the data streaming market, and in this presentation, we knock on its door and see what lies behind it. What is the draw? What makes it an attractive addition? How does it compare to message queues and other message streaming services?
We will give a thorough introduction to what Kafka is. We will also discuss consumers, producers, streams, and connectors; integration with ZooKeeper; and the performance aspects of using Kafka.
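To ground the discussion, sending a record with the Java producer API looks roughly like this (the broker address and topic name are placeholders):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Records are appended to a partition of the "events" topic, keyed for ordering
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events", "user-42", "logged-in"));
        }
    }
}
```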
Gradle is the build tool of choice in the open source world, and rapidly becoming the standard in industry as well. Anyone who works with Gradle on a Java project knows the basics of the Java plugin and how to write simple tasks in Groovy. Gradle can do much more, however. This talk will demonstrate how to write your own custom task classes and how to create Gradle plugins from them. Other Gradle features will be demonstrated as well, including file manipulation, incremental builds, generating the Gradle wrapper, and resolving conflicts in dependencies.
Gradle Inc also provides a free build scan capability to analyze build files. This too will be demonstrated, as well as profiling your build, determining dependencies, and more.
Kotlin is a practical language designed for the JVM. It focuses on pragmatism and safety, with an emphasis on interoperability and tool support. Kotlin is statically typed and includes null checks in the type system itself. The language runs anywhere Java does, from microservices to Android apps. This workshop will show how to use Kotlin to simplify anything you originally planned to do in Java.
Kotlin combines object-oriented programming with functional features like lambda expressions, functional types, and data classes. Since it comes from JetBrains, the company that produces the IntelliJ IDEA development environment, its tool support is excellent. IntelliJ even provides a Java-to-Kotlin converter to help you get started.
This presentation will cover the basic types, defining and calling functions, programming with lambdas, higher order functions, and DSLs. The goal is to give you enough background to be productive while you continue to learn and grow with the language.
JavaScript is one of the most powerful and yet perilous languages. It is arguably the most ubiquitous language, the king of client-side programming on the web. The dynamic nature of the language offers many benefits, but the weak typing and lack of tooling can make changes unpredictable, that is, if we're not careful. Automated testing can greatly alleviate the pains and lead to more fruitful development.
In this presentation we will take a look at the tools, but also the approach, to automate tests for JavaScript code. We will start with simple cases at first, but quickly dive into dealing with dependencies and measuring code coverage.
In this session we will take a look at building applications with Angular. We will build a very simple application from the ground up, and attempt to understand the approach of Angular, as well as understand some of the terminology that Angular introduces.
This session will focus on Angular 7.
TypeScript, Components, Annotations/Directives, Observables, Reactive Stores, Model-Driven forms … Oh my! Angular, much like AngularJs (1.x.x), despite being a powerful platform for building rich client side applications, comes laden with both new terminology, and a “newer” approach to writing client side code.
In this session, as we build a simple application, we will attempt to tease apart these concepts, slowly building our understanding of how these pieces come together, and how we can leverage them to build rich client side applications.
Details

Along the way we will see what the angular-cli generates, how to use the Angular style guide to follow conventions adopted by the Angular community at large, and some ways to use the angular-cli tool.
In this session we will take a look at building applications with Angular. We will build a very simple application from the ground up, and attempt to understand the approach of Angular, as well as understand some of the terminology that Angular introduces.
This session will focus on Angular 7.
TypeScript, Components, Annotations/Directives, Observables, Reactive Stores, Model-Driven forms … Oh my! Angular, much like AngularJs (1.x.x), despite being a powerful platform for building rich client side applications, comes laden with both new terminology, and a “newer” approach to writing client side code.
In this session, as we build a simple application, we will attempt to tease apart these concepts, slowly building our understanding of how these pieces come together, and how we can leverage them to build rich client side applications.
Details

We will also look at pipes in Angular. Along the way we will see how to use the Angular style guide to follow conventions adopted by the Angular community at large, and some ways to use the angular-cli tool.
An integral part to any DevOps effort involves automation. No longer do we wish to manage tens, hundreds or even thousands of servers by hand, even if that were possible. What we need is a programmatic way to create and configure servers, be those for local development, all the way to production.
This is where tools like Ansible come into play. Ansible offers us a way to define what our server configurations are to look like using plain-text, version-controlled configuration files.
Not only does this help with avoiding “snowflakes”, but it promotes server configuration to participate in the SDLC, pulling server configuration closer to the developers.
In this session we will explore what Ansible has to offer, decipher the Ansible terminology, and run some examples to configure a local server.
Ansible, like Git, aims to be a simple tool.
The benefit here is that the level of abstraction that Ansible offers is paper-thin, with no complicated workflows, or opinions enforced by the tool itself.
The downside is that without a prescribed approach to Ansible, developing your playbooks often becomes a case of trial-and-error.
As engineers steeped in the DevOps mindset we must be able to use the tool effectively, allowing us to accelerate and shorten the lead time from development to production.
In this session we will take a look at some lessons learned when working with Ansible, covering a range of topics drawn from real-world use.
As developers we not only operate in different contexts, but also often have these different contexts interplay as part of our work.
Each of the tools that we use — version control systems like Git (along with collaborative tools like GitHub/GitLab), IDEs like Eclipse/IntelliJ, build systems like Gradle, CI/CD tooling like Jenkins, IaC tools like Ansible, the command line — all introduce context.
To be effective developers we need to know when to operate in a certain context, and when to combine or tease apart how these contexts interplay.
Can you improve your release announcements if you format your commit messages consistently? You bet!
How should your build tool interact with your version control system?
What does naming your files have to do with how you use your IDE?
This session will take a look at several of these contexts — it will attempt to discern between them, and explore when you should separate them and when you should bring them together.
With lots of examples and lots of quizzes, this session will definitely leave you thinking about a few things.
Traditionally, Java has supported multithreading, but asynchronous programming in Java is a bit of a novelty. Java 8 introduced CompletableFuture to bring the features of JavaScript promises to Java. In this presentation we will learn about this feature and how to make use of it.
We will dig into CompletableFuture:
- Ways to create one
- Using chaining
- Propagating results
- Dealing with errors
- Using timeouts
- Programming with multiple asynchronous tasks
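A tiny sketch of the chaining style we'll build up to (the slow lookup is simulated):

```java
import java.util.concurrent.CompletableFuture;

public class CompletableFutureDemo {
    public static void main(String[] args) {
        CompletableFuture.supplyAsync(CompletableFutureDemo::findUser)  // runs on a pool thread
                .thenApply(String::toUpperCase)                         // chain a transformation
                .exceptionally(ex -> "unknown user")                    // deal with errors
                .thenAccept(System.out::println)
                .join();                                                // block only for the demo's sake
    }

    static String findUser() {
        return "alice";  // stand-in for a slow remote call
    }
}
```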
This talk will focus on interesting features of Java 8 that go beyond the basics. Topics will include:
- The map, filter, and flatMap methods
- Using Optional as intended
- The java.time package

Sample code will be provided to illustrate all the techniques, along with tests and a build file.
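In that spirit, here is a small hedged sketch touching several of those topics (the data is invented):

```java
import java.time.LocalDate;
import java.time.Month;
import java.util.Arrays;
import java.util.List;
import java.util.Optional;
import java.util.stream.Collectors;

public class BeyondTheBasics {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Duke", "Juggy", "Tux");

        // map and filter over a stream
        List<Integer> lengths = names.stream()
                .filter(name -> name.length() > 3)
                .map(String::length)
                .collect(Collectors.toList());
        System.out.println(lengths);

        // Optional used as intended: no get() without a presence check
        Optional<String> first = names.stream().filter(n -> n.startsWith("J")).findFirst();
        System.out.println(first.map(String::toUpperCase).orElse("none"));

        // java.time: immutable, fluent date handling
        LocalDate release = LocalDate.of(2014, Month.MARCH, 18);
        System.out.println(release.plusYears(3));
    }
}
```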
Vert.x is one of many reactive offerings on the JVM. Vert.x has everything from reactive messaging and data handlers to converters for other streaming frameworks like RxJava.
In this session we will discuss the basics of Vert.x and how its API is structured, followed by the kinds of applications that can be built with Vert.x.
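As a first taste of the API's shape, a minimal Vert.x HTTP server looks something like this (the port is illustrative):

```java
import io.vertx.core.Vertx;

public class HelloVerticle {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();

        // Handlers run on an event loop; nothing here blocks
        vertx.createHttpServer()
             .requestHandler(request -> request.response()
                     .putHeader("content-type", "text/plain")
                     .end("Hello from Vert.x"))
             .listen(8080);
    }
}
```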
On the NFJS tour, there are questions that seem to come up again and again. One common example is “How do we determine which new tools and technologies we should focus our energy on learning?” Another is “How do we stop management from forcing us to cut corners on every release so we can create better and more maintainable code?” which, after a while, becomes “How can we best convince management we need to rewrite the business application?”
There is a single meta-answer to all these questions and many others.
It begins with the understanding that what we as engineers value and what the business values are often very different (even if the ultimate goals are the same). By understanding these different perspectives, it's possible to begin to frame our arguments around the needs and wants of the business. This alone will make any engineer significantly more effective.
This session picks up where “Stop writing code and start solving problems” leaves off, discussing what value is and how we align the values of the business with the needs and values of the engineer.
Big Data is exploding into our industry, and these skills are increasingly in demand. Much of our existing body of knowledge needs to be updated, however, as Big Data solutions present a number of unique challenges. In this half-day workshop we start with the questions “What is Big Data?” and “How do Big Data solutions differ from traditional data architectures?” We then dive into core Hadoop technologies to tackle your Big Data challenges and explore several tools and technologies for distributed processing of large datasets.
The Big Data space is rapidly changing, and with a seemingly dizzying number of technology choices, one might not know where to begin. Our goal with this half-day workshop is to give you an overview of a number of technologies, their respective strengths and weaknesses, and enough knowledge to make informed decisions and build great tech!
We developers really like code.
Code, being plain text, can be version-controlled and follow a traditional SDLC.
For the longest time, however, we were forced to live with having most of our CI/CD and server configurations live outside of our codebases, often at the mercy of infrastructure/operations teams.
With the evolution of DevOps come constructs like IaC (Infrastructure as Code), and with Jenkins 2.0, we can now manage our Jenkins job configurations as code!
In this session we will explore the concept of “Pipeline as Code,” including the DSL that Jenkins offers, and how we can use it to configure Jenkins jobs via simple, version-controlled Jenkinsfiles. We will see how we can create Jenkins jobs by auto-discovering repositories, as well as when we branch our code to create releases.