Monoliths are out and microservices are in. Not so fast. Many of the benefits attributed uniquely to microservices are actually a byproduct of other architectural paradigms with modularity at their core. In this session, we’ll look at several of the benefits we expect from today’s architectures and explore these benefits in the context of various modern architectural paradigms. We’ll also examine different technologies that are applying these principles to build the platforms and frameworks we will use going forward.
Along the way, we’ll explore how to refactor a monolithic application using specific modularity patterns and illustrate how an underlying set of principles span several architectural paradigms. The result is an unparalleled degree of architectural agility to move between different architectural paradigms.
Big architecture up front is not sustainable in today's technology climate, where expectations are high for delivering high-quality software more quickly than ever before. To accommodate change, teams are moving to agile methods. But agile methods provide little architectural guidance, and attempts to define the architectural vision for a system early in the development lifecycle do not work. In this session, we provide practical guidance on software architecture for agile projects.
We will explore several principles that help us create more flexible and adaptable software systems. We’ll expose the true essence of what’s meant when we say “architectural agility.” And we’ll explore the real goal of software architecture and how we can accommodate architectural change to help increase architectural agility.
Microservice architecture is a modern architectural approach that increases development and delivery agility by focusing on building modular services. The framework we use has a tremendous impact on how quickly and easily we can deliver services. New frameworks are emerging that embrace new approaches for helping us deliver microservices.
In this session, we will explore some modern Java frameworks for building microservices (aka micro frameworks). Example frameworks you may see include Dropwizard, Ratpack, Spark, Ninja, RestExpress, Play, Restlet, and RestX. We'll demonstrate each framework by using a programming kata to build the same service with several different frameworks. Optionally, bring your own laptop, clone the GitHub repo, and build the services along with me. To do this, you must have Java 8+ and Gradle.
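To give a flavor of how small these micro frameworks can be, here is a minimal sketch of a hello endpoint using Spark Java, one of the frameworks listed above; the route path and port are illustrative, and the project would need the spark-core dependency on the classpath.

```java
import static spark.Spark.get;
import static spark.Spark.port;

public class HelloService {
    public static void main(String[] args) {
        port(8080);                                            // listen on port 8080
        get("/hello", (request, response) -> "Hello, World");  // a single GET route
    }
}
```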
The way we build and deliver software is changing. We must deliver software more quickly than ever before, and traditional approaches to software architecture, infrastructure, and methodology do not allow us to meet demand. We've reached the limits of agility through process improvement alone, and further increases demand that we improve architecture, infrastructure, and methodology simultaneously. 12 Factor is an app development methodology for building apps in the modern era.
Building modern apps requires modern methods. 12 Factor is an app development methodology that helps teams build software by emphasizing practices that meld modern architectural paradigms with agile practices such as continuous delivery for deployment to cloud platforms. In this session, we'll examine the 12 factors and explore how to apply them to apps built using Java.
Java 9 with the Jigsaw module system is here. In this session, we'll explore the basics of the Jigsaw module system and examine the impact it will have on how we build Java applications. We will dig into its major features, including dependency management and Jigsaw services. Once we understand Jigsaw's basics, we will explore what it's going to take to migrate existing Java applications to Java 9 and leverage Jigsaw.
Jigsaw's impact stands to be consequential. Jigsaw will restrict application code from accessing non-published JDK classes (i.e., internal classes in the sun.* packages), require you to be explicit in declaring your dependencies, and more. We will explore Jigsaw basics and then dig into the impact Jigsaw will have on migrating existing Java applications to Java 9.
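To give a taste of what explicit dependency declarations look like, here is a minimal, hypothetical module descriptor; the module and package names are made up for illustration.

```java
// module-info.java -- placed at the root of the module's source tree
module com.example.orders {
    requires java.sql;               // explicit dependency on a platform module
    exports com.example.orders.api;  // only this package is visible to other modules
}
```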
Reactive architecture patterns allow you to build self-monitoring and self-healing systems that can react to both internal and external conditions without human intervention. How would you like to design systems that can automatically grow as the business grows, automatically handle varying load (Cyber Monday?), and automatically handle (and repair) internal and external errors, all without human interaction? I'll show you how to do this with your current technology stack (no special languages, tools, frameworks, or products). In this two-part session I will leverage both slides and live coding using Java and RabbitMQ to describe and demonstrate how to build reactive systems. Get ready for the future of software architecture - that you can start implementing on Monday.
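As a hint of the kind of live coding involved, here is a minimal sketch that publishes a monitoring event to RabbitMQ using the standard Java client; the queue name, message, and connection settings are assumptions.

```java
import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class EventPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                        // assumed local broker
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // durable queue for internal monitoring events (illustrative name)
            channel.queueDeclare("system.events", true, false, false, null);
            channel.basicPublish("", "system.events", null,
                    "load-spike-detected".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```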
Part 1 Agenda:
Reactive architecture patterns allow you to build self-monitoring and self-healing systems that can react to both internal and external conditions without human intervention. How would you like to design systems that can automatically grow as the business grows, automatically handle varying load (Cyber Monday?), and automatically handle (and repair) internal and external errors, all without human interaction? I'll show you how to do this with your current technology stack (no special languages, tools, frameworks, or products). In this two-part session I will leverage both slides and live coding using Java and RabbitMQ to describe and demonstrate how to build reactive systems. Get ready for the future of software architecture - that you can start implementing on Monday.
Part 2 Agenda:
There are many different uses for Apache Kafka. It can be used as a streaming broker, an event broker for transactional data, and even a database. This session is about understanding streaming architecture and how to implement it using Apache Kafka. I start this session by talking about some of the streaming architecture patterns, then dive into how Apache Kafka works using the Core API. Using live coding examples in Apache Kafka, I also talk about the differences between Kafka and regular messaging (RabbitMQ, ActiveMQ, etc.) and when you should use each. I end this session by putting everything together, showing an actual streaming architecture using Kafka within a microservice ecosystem for gathering various metrics for business and operational monitoring and reporting.
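To ground the Core API discussion, here is a minimal producer sketch using Kafka's Java client; the broker address, topic name, and metric values are assumptions.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MetricsProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // assumed broker address
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // append one metric event to the "metrics" topic (illustrative topic name)
            producer.send(new ProducerRecord<>("metrics", "checkout-latency", "125ms"));
        }
    }
}
```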
Agenda:
One of the expectations of any software architect is to analyze the current technology environment and recommend solutions for improvement. This is otherwise known as continually assessing architecture vitality. Too many times software architects fail to regularly perform this task, leading to emergency refactoring efforts to save a troubled system from failure. The question is, what does it mean to assess an application architecture? In this session we will explore static analysis metrics and tools and techniques for leveraging those metrics for determining structural decay. Using a real-world large-scale application, I'll show you how to leverage code metrics to find (and fix) structural decay before it gets you into trouble.
Agenda:
It seems like all we talk about these days is making our architectures more modular. But why? In this session I will discuss the drivers and reasons why it is essential to move towards a level of modularity in our architectures. I will discuss and show real-world use cases of distributed modular architectures (specifically microservices and service-based architecture), then discuss in detail the core differences between microservices and service-based architecture and when you should consider each. I'll end the talk by discussing the most effective way of migrating to modular distributed architectures.
Agenda:
Everyone (including your humble speaker) talks about modularization. For once, let's focus on other interesting and exciting parts of Java 9 that do not involve modularization. In this presentation we will talk about key improvements and additions to the Java language and the JDK introduced in Java 9.
We will focus first on things that can be done better in Java 9 than in previous versions of Java.
Then we will look at things we simply could not do before but can benefit from now.
And finally, we will not talk about modularization. Well, I lied; we will discuss that just a bit.
Reactive Programming is no longer an esoteric idea. Many libraries and tools have started providing excellent support for this style of programming, so much so that Java 9 has embraced the idea by including it in the specification. In this presentation we will take a dive into what reactive programming looks like from the Java specification point of view.
Reactive Programming
What problem does it solve?
How do you use it from a Java point of view? (A minimal sketch follows this outline.)
What can you do to make good use of it?
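A minimal sketch, assuming Java 9's java.util.concurrent.Flow API and its built-in SubmissionPublisher, of what publishing and consuming items can look like; the messages are illustrative.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.SubmissionPublisher;

public class FlowHello {
    public static void main(String[] args) throws Exception {
        // SubmissionPublisher is the JDK's ready-made Flow.Publisher implementation
        SubmissionPublisher<String> publisher = new SubmissionPublisher<>();

        // consume() attaches a simple subscriber that requests items and passes them on
        CompletableFuture<Void> done =
                publisher.consume(item -> System.out.println("got: " + item));

        publisher.submit("hello");
        publisher.submit("reactive world");
        publisher.close();   // signals completion to the subscriber
        done.get();          // wait for the subscriber to finish
    }
}
```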
After years of stagnation, Java is one of the most vibrant languages of current times. Java 8 saw several improvements and Java 9 a few more. Well, that's only the beginning. Come to this talk to learn about where Java is heading, what's brewing in the language and the ecosystem, well beyond Java 9.
We will look ahead into the next versions of Java, what you can expect, and how you can get ready.
In this session we will take a look at building applications with Angular. We will build a very simple application from the ground up, and attempt to understand the approach of Angular, as well as understand some of the terminology that Angular introduces.
This session will focus on Angular 10.
TypeScript, Components, Annotations/Directives, Observables, Reactive Stores, Model-Driven forms … Oh my! Angular, much like AngularJS (1.x), despite being a powerful platform for building rich client-side applications, comes laden with both new terminology and a “newer” approach to writing client-side code.
In this session, as we build a simple application, we will attempt to tease apart these concepts, slowly building our understanding of how these pieces come together and how we can leverage them to build rich client-side applications.
Details
Along the way we will see how to use the Angular style guide to follow conventions adopted by the Angular community at large, and some ways to use the angular-cli tool.
In this session we will take a look at building applications with Angular. We will build a very simple application from the ground up, and attempt to understand the approach of Angular, as well as understand some of the terminology that Angular introduces.
This session will focus on Angular 10.
TypeScript, Components, Annotations/Directives, Observables, Reactive Stores, Model-Driven forms … Oh my! Angular, much like AngularJS (1.x), despite being a powerful platform for building rich client-side applications, comes laden with both new terminology and a “newer” approach to writing client-side code.
In this session, as we build a simple application, we will attempt to tease apart these concepts, slowly building our understanding of how these pieces come together and how we can leverage them to build rich client-side applications.
Details
pipes in Angular
Along the way we will see how to use the Angular style guide to follow conventions adopted by the Angular community at large, and some ways to use the angular-cli tool.
Angular leverages a whole slew of technologies — from TypeScript to RxJS to Protractor for E2E testing, all within reach via the Angular CLI. All of these tools come with their own tooling, and knowing how to use them independently, and in concert, is what makes us effective Angular developers.
In this session we will first look at the tooling for each of these pieces individually. We will see what configuration options are available to us, and then see how to leverage the Angular CLI to bring all of these together. We will also see some of the features the Angular CLI offers to decrease our time from development to production.
An integral part of any DevOps effort involves automation. No longer do we wish to manage tens, hundreds, or even thousands of servers by hand, even if that were possible. What we need is a programmatic way to create and configure servers, whether for local development or all the way to production.
This is where tools like Ansible come into play. Ansible offers us a way to define what our server configurations should look like using plain-text, version-controlled configuration files.
Not only does this help with avoiding “snowflakes,” but it promotes server configuration to a participant in the SDLC, pulling server configuration closer to the developers.
In this session we will explore what Ansible has to offer, decipher the Ansible terminology, and run some examples to configure a local server.
Ansible, like Git, aims to be a simple tool.
The benefit here is that the level of abstraction that Ansible offers is paper-thin, with no complicated workflows, or opinions enforced by the tool itself.
The downside is that without a prescribed approach to Ansible, developing your playbooks often becomes a case of trial-and-error.
As engineers steeped in the DevOps mindset we must be able to use the tool effectively, allowing us to accelerate and shorten the lead time from development to production.
In this session we will take a look at some lessons learned when working with Ansible. Topics covered:
In this session we will get acquainted with Docker. We will discuss what Docker is, how to install it, and how to start using it. We will also explore some of the benefits of containerizing your applications.
Containers are taking over the world. Containers provide a means to have hermetic builds of your software, allowing for truly immutable testing and delivery of your software. Docker is one of many containerization technologies, and in this session we will take a brief look at Docker and what it has to offer.
In this session we will dive deeper into Dockerfiles. We will explore the DSL that Dockerfiles provide to allow for the automation of image creation.
Dockerfiles provide a means to automate the creation of images, and consequently the containers within which our applications run. The Dockerfile, though minimal, provides us with everything we need to package our software and enable it to run. In this session we will dive deep into the Docker DSL, exploring the many commands it provides, the differences between similar commands, and some gotchas along the way.
By the end of this conference you will have learned many new tools and technologies. The easy part is done; now for the hard part: getting the rest of the team, and management, on board with the new ideas. Easier said than done.
Whether you want to effect culture change in your organization, lead the transition toward a new technology, or are simply asking for better tools; you must first understand that having a “good idea” is just the beginning. How can you dramatically increase your odds of success?
You will learn 12 concrete strategies to build consensus within your team as well as 6 techniques to dramatically increase the odds that the other person will say “Yes” to your requests.
As a professional mentalist, Michael has been a student of psychology, human behavior, and the principles of influence for nearly two decades. There are universal principles of influence that are necessary to both understand and leverage if you want to be a more effective leader of change in your organization.
In this session we discuss strategies for getting your team on board as well as when and how to approach management within the department and higher up in the organization.
In Part 1, you learned the core principles of influence and persuasion. How do we take this back to the office and apply what we've learned?
We dive deep into specific strategies to get both the team and the business on board with your ideas and solutions. We cover several real-world patterns you can follow to be more effective and more persuasive. Part 1 was conceptual; Part 2 is practical.
On the NFJS tour, there are questions that seem to come up again and again. One common example is “How do we determine which new tools and technologies we should focus our energy on learning?” Another is “How do we stop management from forcing us to cut corners on every release so we can create better and more maintainable code?” which, after a while, becomes “How can we best convince management we need to rewrite the business application?”
There is a single meta-answer to all these questions and many others.
It begins with the understanding that what we as engineers value and what the business values are often very different (even if the ultimate goals are the same). By understanding these different perspectives, it's possible to begin to frame our arguments around the needs and wants of the business. This alone will make any engineer significantly more effective.
This session picks up where “Stop writing code and start solving problems” leaves off, discussing what value is and how we align the values of the business with the needs and values of the engineer.
Think back over your career as a software developer. How many great–and I mean truly great–software developers have you worked with? The chances are you can count them on one hand. Developers with 20-30 years of experience can rarely name more than two or three. What does it take to be considered truly great? Michael can certainly tell you what NOT to do… from cold, bitter experience.
Ego Driven Development is a humorous and brutally honest look at a career spanning nearly two decades. We dive into war stories, train wrecks and so many misguided attempts at greatness. There is a path to becoming a truly great software engineer, but it's not always where we first look.
Kafka has captured mindshare in the data streaming market, and in this presentation we knock on its door and see what lies behind. What is the draw? What makes it an attractive addition? How does it compare to message queues and other message streaming services?
We will give a thorough introduction to what Kafka is. We will also discuss consumers, producers, and streams; integration with ZooKeeper; and the performance aspects of using Kafka.
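As a small taste of the consumer side, here is a sketch using Kafka's Java client (the Duration-based poll of Kafka 2.0+ is assumed); the broker address, group id, and topic name are illustrative.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");    // assumed broker
        props.put("group.id", "demo-group");                  // consumer group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders")); // illustrative topic
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d key=%s value=%s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}
```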
Apache Spark is a fast data-processing engine for large document stores and databases. Spark is highly distributed, optimized, and resilient, built for manipulating and aggregating data across large clusters.
This talk is an introduction to Apache Spark, its architecture, and its programming API. We start with an introduction to DataFrames, the Catalyst optimizer, and Spark SQL. We will then move on to Datasets, discussing the Dataset API and its functional programming aspects. We will touch lightly on RDDs and the pros and cons of using that API. We will then finish with how to connect to data sources like HDFS, S3, Cassandra, Elasticsearch, and Kafka. This presentation includes samples that you can try out at home or at the office.
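As a preview of the DataFrame API with the Catalyst optimizer planning the work, here is a minimal sketch in Java; the input path and column names are assumptions.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class SparkIntro {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("spark-intro")
                .master("local[*]")               // run locally for the demo
                .getOrCreate();

        // Load a DataFrame from a JSON file (path is illustrative)
        Dataset<Row> people = spark.read().json("people.json");

        // Catalyst plans this filter/aggregate before any data is touched
        people.filter(col("age").gt(21))
              .groupBy(col("country"))
              .count()
              .show();

        spark.stop();
    }
}
```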
Spark Streaming is one of the additions available with Spark that builds on its internal architecture to provide a stream-processing framework for processing data in real time.
In this presentation, we will start with a short reintroduction to Spark, its architecture, and what it does. Then we delve into streaming: what purpose it serves, how to set up Spark Streaming, and how to use it. We will discuss how batch intervals are configured, how the internals work, and how to integrate it with Kafka. We will then talk about some of the more advanced features like checkpointing and windowing.
Serialization is important for anything Big Data. We need to send information over the wire, and we need to do so efficiently. This core-concept presentation covers various serialization techniques and libraries so that you can use Akka, Kafka, Spark, and various MQs efficiently.
For this presentation, we will focus on three topics: core Java serialization, Apache Avro, and Google's Protocol Buffers. We will review ease of use for serialization and deserialization, compare performance, and see how to integrate each with Akka, Kafka, and Spark.
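As a baseline for the comparison, here is a minimal sketch of core Java serialization round-tripping an object through bytes; the Metric class and its values are invented for illustration.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {
    // A simple value type marked Serializable for core Java serialization
    static class Metric implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        final double value;
        Metric(String name, double value) { this.name = name; this.value = value; }
    }

    public static void main(String[] args) throws Exception {
        // Serialize the object to bytes, as you would before sending it over the wire
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new Metric("checkout-latency", 125.0));
        }

        // Deserialize it back; note the payload size compared to Avro or Protocol Buffers
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            Metric copy = (Metric) in.readObject();
            System.out.println(copy.name + " = " + copy.value
                    + " (" + bytes.size() + " bytes)");
        }
    }
}
```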
Many have already seen what reactive streaming can do: RxJava, Akka Streams, Project Reactor. Now Reactive Streams is part of the canonical package for Java, and we can handle asynchronous pipelines with boundaries and build better, well-thought-out applications.
This presentation introduces you to the core components of Reactive Streams: Publisher, Subscriber, and Processor. We will discuss when to use Reactive Streams and how to handle complicated scenarios like backpressure. We will also cover some of the core problem spaces and how to overcome them.
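A minimal sketch, using Java 9's Flow API as the standard home of these interfaces, showing a Publisher and a Subscriber that requests items one at a time, which is the demand signaling that underlies backpressure; the messages are illustrative.

```java
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    public static void main(String[] args) throws InterruptedException {
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;

                public void onSubscribe(Flow.Subscription subscription) {
                    this.subscription = subscription;
                    subscription.request(1);              // pull one item at a time
                }
                public void onNext(String item) {
                    System.out.println("received: " + item);
                    subscription.request(1);              // request the next item
                }
                public void onError(Throwable throwable) { throwable.printStackTrace(); }
                public void onComplete() { System.out.println("done"); }
            });
            publisher.submit("hello");
            publisher.submit("reactive world");
        }
        Thread.sleep(500);  // crude wait for asynchronous delivery in this demo
    }
}
```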
Big Data is exploding into our industry and these skills are increasingly in demand. Much of our existing body of knowledge needs to be updated, however, as Big Data solutions present a number of unique challenges. In this half-day workshop we start with the questions “What is Big Data?” and “How do big data solutions differ from traditional data architectures?” We then dive into core Hadoop technologies to tackle your Big Data challenges and into several tools and technologies for distributed processing of large datasets.
The Big Data space is rapidly changing, and with a seemingly dizzying number of technology choices, one might not know where to begin. Our goal with this half-day workshop is to give you an overview of a number of technologies, their respective strengths and weaknesses, and enough knowledge to make informed decisions and build great tech!
Big Data is exploding into our industry and these skills are increasingly in demand. Much of our existing body of knowledge needs to be updated, however, as Big Data solutions present a number of unique challenges. In this half-day workshop we start with the questions “What is Big Data?” and “How do big data solutions differ from traditional data architectures?” We then dive into core Hadoop technologies to tackle your Big Data challenges and into several tools and technologies for distributed processing of large datasets.
The Big Data space is rapidly changing, and with a seemingly dizzying number of technology choices, one might not know where to begin. Our goal with this half-day workshop is to give you an overview of a number of technologies, their respective strengths and weaknesses, and enough knowledge to make informed decisions and build great tech!
JavaScript is no longer that scary nasty language. There are so many wonderful features in the modern versions of this language that I can't stop praising it. In this presentation we will focus on some of the most exciting features of the language that make programming in JavaScript a pleasant experience.
We will dive into beautiful parts and features from ES 6, ES 7, and ES 8.
React is a very targeted library for creating rich front ends. While React does what it intends to provide fairly well, developing a practical application requires more than React alone.
In this presentation we will build the front end for a small application that pulls data from a backend and along the way learn how to use React and related libraries to achieve the desired results.
In this presentation, we will dig deeper into creating applications with React.
We will take a deeper look into creating components, dealing with life cycle events, managing state, when and how to use Redux, and how to approach automated testing of the front-end code you create.
In this example-driven presentation, you'll learn how to leverage Spring Boot to accelerate application development, enabling you to focus coding on logic that drives application requirements with little concern for code that satisfies Spring's needs.
For over a decade, Spring has sought to make enterprise Java development easier. It began by offering a lighter alternative to EJBs, but continued on to address things such as security, working with various sorts of databases, cloud-native applications, and reactive programming. And, along the way, Spring even took steps to make itself easier to use, offering Java-based and automatic component configuration. Even so, there's still a lot of near-boilerplate code required to develop Spring applications.
Enter Spring Boot. Spring Boot's primary purpose is to make Spring easier to work with. It achieves this in three ways:
Altogether, Spring Boot lets you focus on fulfilling your application's requirements without worrying about writing code that satisfies the needs of a framework.
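As a small illustration of how little near-boilerplate remains, here is a minimal sketch of a complete Spring Boot web application; the endpoint and message are illustrative.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication          // enables auto-configuration and component scanning
@RestController
public class GreetingApplication {

    public static void main(String[] args) {
        SpringApplication.run(GreetingApplication.class, args);
    }

    @GetMapping("/greeting")    // illustrative endpoint
    public String greeting() {
        return "Hello from Spring Boot";
    }
}
```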
In this session, you'll learn how to take your Spring Boot skills to the next level, applying the latest features of Spring Boot. Topics may include Spring Boot DevTools, configuration properties and profiles, customizing the Actuator, and crafting your own starters and auto-configuration.
TBD
In this session, you'll learn how to use Spring Data to rapidly develop repositories for a variety of database types, including relational (JPA and JDBC), document (Mongo), graph (Neo4j), and others (Redis, Cassandra, CouchBase, etc).
For decades, relational databases and SQL have enjoyed their position as the leading choice for data persistence. Even though many alternative database types have emerged in recent years, the relational database is still a top choice as a general-purpose data store and will not likely be usurped from its position any time soon. When it comes to working with relational data, Java developers have several options.
But relational databases are not a one-size-fits-all solution. Thankfully, there are many options for data persistence, including relational, document, graph, key-value, and column-store databases, each presenting their unique way of handling data suitable for different problems.
Spring Data makes it easy to work with various databases by offering a programming model that is consistent, regardless of which type of database you're working with. And regardless of the database you're dealing with, you will find that Spring Data eliminates a lot of boilerplate code.
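As a quick illustration of that consistent programming model, here is a sketch of a Spring Data JPA repository; the Customer entity, its fields, and the derived query method are assumptions for illustration, and the JPA annotations use the javax namespace of that era.

```java
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import org.springframework.data.repository.CrudRepository;

// A minimal JPA entity; field names are illustrative.
@Entity
class Customer {
    @Id @GeneratedValue Long id;
    String firstName;
    String lastName;
}

// Spring Data generates the implementation of this interface at runtime.
interface CustomerRepository extends CrudRepository<Customer, Long> {
    // Derived query: the query is built from the method name alone
    List<Customer> findByLastName(String lastName);
}
```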
In this session, you'll see how to take Spring Data's automatic repository generation to a whole new level. We'll look at ways to model data and manipulate Spring Data to produce repositories and APIs that are more than just CRUD layers on top of a database.
TBD
In this session, you'll learn how to take advantage of the latest features of Spring Security and Spring Boot to secure an application, both at the web layer and at the method-level.
Information is probably the most valuable item we now have; crooks are looking for ways to steal our data and identities by sneaking into unsecured applications. As software developers, we must take steps to protect the information that resides in our applications. Whether it’s an email account protected with a username/password pair or a brokerage account protected with a trading PIN, security is a crucial aspect of most applications.
Spring Security is an extension to the Spring Framework that provides security to Spring applications both at the HTTP request level and at the method-invocation level.
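A minimal sketch of web-layer security plus method-level security being enabled; the paths and roles are assumptions, and the exact configuration style varies by Spring Security version (this follows the WebSecurityConfigurerAdapter style of the Spring Security 5 era).

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true)   // allows @PreAuthorize on methods
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
                .antMatchers("/admin/**").hasRole("ADMIN")   // illustrative paths and roles
                .anyRequest().authenticated()
            .and()
                .formLogin();
    }
}
```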
Serverless is a big topic these days. More specifically, Function as a Service (FaaS) enables small-scale functions deployed in the cloud to be enabled only as needed. Compared to Platform as a Service (PaaS) services, which are “always on” and require explicit scaling, FaaS services wake up when needed and are scaled by the platform automatically.
Imagine a service that only gets invoked once per hour. In PaaS, that service is always running and ready to handle requests, consuming processor resources and ultimately costing more to be ready at a moment's notice. In contrast, FaaS services awaken quickly upon invocation and go away when finished, conserving resources and cost.
In this example-driven session, we'll see how to create and deploy Function as a Service (FaaS) services with Spring and Spring Boot.
So your server is having issues? Memory? Connections? Limited response? Is the first solution to bounce the server? Perhaps change some VM flags or add some logging? In today's Java 9 world, with its superior runtime monitoring and management capabilities, the reasons to bounce the server have been greatly reduced.
This session will look at the Java monitoring and management capabilities, which include the ability to change VM arguments on the fly. This session will dive into the different memory compartments and how they are used by the JVM. Finally, this session will explore the different GC options and how they affect response times and throughput.
This session will include new features in Java 9 to make it more container aware.
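As a small taste of the monitoring APIs involved, here is a sketch that reads heap and GC data through the platform MXBeans, the same data that JMX tools such as jconsole expose; the output formatting is illustrative.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapReport {
    public static void main(String[] args) {
        // The platform MXBeans expose the JVM's own view of memory and GC activity
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.printf("heap used=%d MB, committed=%d MB, max=%d MB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);

        System.out.println("GC beans:");
        ManagementFactory.getGarbageCollectorMXBeans().forEach(gc ->
                System.out.printf("  %s: %d collections, %d ms%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime()));
    }
}
```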
Containers and microservices are like peanut butter and jelly; they just go together. However, do you know how resources are managed in a container and how that affects your application? This session will dive into how Java behaves when running under control groups.
This session will review containers and the technology underlying them, along with different container runtimes such as Docker. We will look at the anatomy of a container, including control groups (cgroups) and namespaces. This session will conclude with the challenges of running the JRE in a container and the impact that has on the efficiency of the process at scale.
This session will include container friendly changes to Java 9 around memory and ergonomics. It will also provide advice on how to get consistent CPU share.
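As a quick sanity check you can run inside a container, here is a sketch that prints the CPU and heap resources the JVM believes it has; on older, non-container-aware JVMs these may reflect the host rather than the cgroup limits.

```java
public class ContainerLimits {
    public static void main(String[] args) {
        Runtime runtime = Runtime.getRuntime();
        // Inside a container these values show what the JVM *thinks* it has been given
        System.out.println("available processors: " + runtime.availableProcessors());
        System.out.println("max heap (MB): " + (runtime.maxMemory() >> 20));
    }
}
```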
There are lots of questions about how to get Docker applications into a production environment. This is the session to discuss the options along with their pros and cons.
This session will start with the discussion of why we need a container orchestrator and will dive into and compare the top contenders which include:
Spock is a Groovy-based testing framework that leverages all the “best practices” of the last several years, taking advantage of much of the industry's accumulated development experience. So combine JUnit, BDD, RSpec, Groovy and Vulcans… and you get Spock!
There are 3 tools I use on every Java project I control… this is one of them and with good reason.
This session assumes some understanding of testing and JUnit and builds on it. We will introduce and dig deep into Spock as a test specification and mocking tool. Topics include:
Unit testing
Data driven tests
Mocking and Stubbing
Partial Mocks
Spock Extensions
Gradle is the build tool of choice in the open source world, and rapidly becoming the standard in industry as well. Anyone who works with Gradle on a Java project knows the basics of the Java plugin and how to write simple tasks in Groovy. Gradle can do much more, however. This talk will demonstrate how to write your own custom task classes and how to create Gradle plugins from them. Other Gradle features will be demonstrated as well, including file manipulation, incremental builds, generating the Gradle wrapper, and resolving conflicts in dependencies.
Gradle Inc also provides a free build scan capability to analyze build files. This too will be demonstrated, as well as profiling your build, determining dependencies, and more.
Kotlin is a practical language designed for the JVM. It focuses on pragmatism and safety, with an emphasis on interoperability and tool support. Kotlin is statically typed and includes null checks in the type system itself. The language runs anywhere Java does, from microservices to Android apps. This workshop will show how to use Kotlin to simplify anything you originally planned to do in Java.
Kotlin combines object-oriented programming with functional features like lambda expressions, functional types, and data classes. Since it comes from JetBrains, the company that produces the IntelliJ IDEA development environment, its tool support is excellent. IntelliJ even provides a Java-to-Kotlin converter to help you get started.
This presentation will cover the basic types, defining and calling functions, programming with lambdas, higher order functions, and DSLs. The goal is to give you enough background to be productive while you continue to learn and grow with the language.
Good discussions are supposed to diverge from their intended path. Free association is a feature, not a bug, and helps you see new connections between ideas. Without structure, however, it can be difficult to add context to new ideas and understand how they relate to more immediate problems. This talk discusses the technique of mental bookmarks – how to remember where you were when a discussion diverged. In addition to giving you a reputation for having an amazing memory, the skill also helps with personal awareness in general.
To give the technique context, we'll look at the fractal nature of success – the way we tend to see our current environment in relative terms, always comparing ourselves to those slightly more successful and slightly less successful.
Adoption of the Kotlin programming language has been most rapid among Android developers. Kotlin is ideally suited to Android development, because it is safe, fast, and optimized for small footprint environments. It is also officially supported by Google for Android development. This talk will review the features of Kotlin that work best for Android, from using the Android Studio IDE, to working with lambda expressions, using extension methods, and more.
We'll look at creating an Android project with Kotlin support, using the Android APIs with Kotlin, using the Anko extension library, and coroutines for concurrency.
There are good reasons to use Angular and there are separate good reasons to use React. But are there reasons to mix them? If so, why, and in what way would we want to mix them?
This presentation will discuss the dos and don'ts of using React with Angular and, using examples, illustrate how to carry out the task of using them together.
Reactive programming is no longer a fancy idea. It is not only gaining popularity and acceptance, it is being widely adopted in almost every level of software architecture.
In this presentation we will spend a short time reviewing what reactive programming is and its core characteristics, and then dive into different parts of a typical enterprise application to see how reactive programming is serving to improve performance and scalability.
Design patterns are commonplace in OO programming. With the introduction of lambda expressions in languages like Java, one has to wonder about their influence on design patterns.
In this presentation we will take up some of the common design patterns and rework them using lambda expressions. We will also explore some other patterns that are not so common, but are quite useful ways to apply lambdas.
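As one example of the reworking, here is a sketch of the classic strategy pattern collapsed into lambdas; the selection strategies are invented for illustration.

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

public class StrategyWithLambdas {
    // The "strategy" is simply a Predicate passed in as a lambda,
    // replacing a hierarchy of one-method strategy classes.
    static List<Integer> select(List<Integer> numbers, Predicate<Integer> strategy) {
        return numbers.stream().filter(strategy).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> numbers = List.of(1, 2, 3, 4, 5, 6);
        System.out.println(select(numbers, n -> n % 2 == 0));  // even-number strategy
        System.out.println(select(numbers, n -> n > 3));       // greater-than-three strategy
    }
}
```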
Creating code is easy; creating good code takes a lot of time, effort, discipline, and commitment. The code we create is truly the manifestation of our designs. Creating a lightweight design can help make the code more extensible and reusable.
In this presentation we will take an example oriented approach to look at some core design principles that can help us create better design and more maintainable code.
To say the least, writing classes in JavaScript was a weird experience. It never felt right. Well, the language has finally set it right. In this presentation we will learn about the different capabilities in JavaScript for doing OO programming. Along the way we will dive into some advanced concepts as well.
Learn how to program with this powerful language that has matured significantly in the recent years.
This session covers the landscape of Big Data tools, technologies and best practices in 2018. You'll leave this session armed with the knowledge you need to build Big Data solutions by assembling the best technologies for you.
We cover the components of a big data pipeline, options available for each module and the pros, cons and best practices for each option.
As a software developer, you've likely come across at least one veteran in our field who has shared tales of the old days when they used punch cards to give instructions to a computer. Thankfully, those days are long gone. Over time, the way we interact with machines has evolved through various stages such as textual, graphical, and touch/gestural user interfaces…up to today where Siri, Alexa, and Google Home are ushering in a new era of voice user interfaces.
In this session, we'll focus on Alexa, the voice assistant present in Amazon's line of Echo devices. We'll explore the Alexa Skills Kit (ASK) and see how to construct voice UIs (known as “skills”) to do our bidding. We'll see how to back those skills using Spring, including Spring MVC and Spring Cloud Function. And maybe, one day, we'll tell future generations about how we once had to actually touch computers.
In this session, we'll explore the new reactive features in Spring 5 to build reactive, non-blocking applications using Spring's familiar programming model.
Traditionally, applications have been built using a blocking, synchronous model. Although comfortable and intuitive for most programmers, this model doesn't scale well. And although there are several new approaches to reactive programming, they don't necessarily fit into the familiar programming model that Spring developers are accustomed to working with.
Spring 5 has introduced a set of new reactive features, enabling non-blocking, asynchronous code that scales well using minimal threads. Moreover, it builds on the same concepts and programming models that Spring developers have used for years.
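As a small illustration of that familiar programming model turned reactive, here is a sketch of a WebFlux-style controller returning a Flux; the endpoint and data are invented for illustration.

```java
import java.time.Duration;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
public class QuoteController {

    // Returning a Flux lets Spring stream results without blocking a thread per request
    @GetMapping("/quotes")
    public Flux<String> quotes() {
        return Flux.just("AAPL", "GOOG", "MSFT")
                   .delayElements(Duration.ofMillis(100))   // simulate a slow upstream
                   .map(symbol -> symbol + ": 42.00");
    }
}
```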
Java is now on a six-month release schedule, with new features being added all the time. This talk summarizes the features that have been added to Java, including collection factory methods, private methods in interfaces, records, the enhanced switch statement, and more. The goal is to show not only what has been added, but why and where to use them.
Additional topics will include local variable type inference, the new HTTP client, pattern matching, and text blocks. This talk will be updated throughout the year as more features are added to new releases.
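A minimal sketch pulling a few of these features together (collection factory methods, a record, and an enhanced switch expression); the Point record and quadrant logic are invented for illustration.

```java
public class ModernJavaDemo {
    // A record: a concise, immutable data carrier (Java 16+)
    record Point(int x, int y) { }

    static String quadrant(Point p) {
        // Enhanced switch expression with arrow labels (Java 14+)
        return switch (Integer.signum(p.x())) {
            case 1  -> p.y() >= 0 ? "upper right" : "lower right";
            case -1 -> p.y() >= 0 ? "upper left" : "lower left";
            default -> "on the y-axis";
        };
    }

    public static void main(String[] args) {
        // Collection factory methods (Java 9+) and local variable type inference (Java 10+)
        var points = java.util.List.of(new Point(3, 4), new Point(-2, -7));
        points.forEach(p -> System.out.println(p + " is " + quadrant(p)));
    }
}
```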
Continuous delivery is not a pipe-dream technology, reserved only for the “cool kids” at hip tech startups. Although it's not easy, many concepts are within reach of most teams. That being said, it requires more than simple technology changes. Attend this session to learn the fundamental concepts of CD, how to build your CD pipeline with Gradle and Jenkins, and recommendations on tools and best practices.
No prior knowledge is assumed and this talk will start from first principles.
Part one begins with a detailed overview of what CD is (and isn't) and how to build a business case for CD. Making both the technical case and business case for CD is vital as it's necessary to get the entire organization on board with the changes required.
Part two is a deeper dive into building a continuous delivery pipeline with Gradle and Jenkins (although the broader concepts can be applied to the tooling of your choice). You'll see how easily Gradle integrates with Java and how to leverage configuration management and Gradle plugins to build all of your quality gates.
It happens to us all; there are simply days where it seems impossible to get anything done. This session focuses on techniques and tips to get into the zone, stay in the zone and to protect your productivity, even in disruptive environments.
Rather than focusing on any one productivity methodology (e.g., GTD), this talk analyzes the internal and external factors that affect our productivity and offers broader strategies to get back on track.