Using the Microservices architectural style to incrementally adopt an Event-Driven Architecture (EDA) lowers up-front costs while decreasing time-to-market. EDA extracts value from events that already occur in your systems, limiting invasive refactoring and avoiding disruption to existing application development efforts. Implementing event-driven microservices yields intelligent, scalable, extensible, reactive endpoints.
This session will cover the fundamentals, patterns, techniques and pitfalls of Event-driven Microservices with several demos leveraging Spring-Boot, Camel, ActiveMQ and Docker.
No matter the techniques used to make enterprise solutions Highly Available (HA), failure is inevitable at some point. Resiliency refers to how quickly a system reacts to and recovers from such failures. This presentation discusses architectural resiliency techniques and patterns that increase fault tolerance (raising Mean Time To Failure, MTTF) and decrease Mean Time To Recovery (MTTR).
Failure of Highly Available (HA) enterprise solutions is inevitable. However, in today's highly interconnected global economy, uptime is crucial. The impact of downtime is amplified by Service Level Agreement (SLA) penalties and lost revenue. Even more damaging is the harm to an organization's reputation as frustrated customers air their grievances on social media. Resiliency, often overlooked in favor of availability, is essential.
Software architecture involves inherent trade-offs. Some of these trade-offs are clear, such as performance versus security or availability versus consistency, while others are more subtle, like resiliency versus affordability. This presentation will discuss various architectural trade-offs and strategies for managing them.
The role of a technical lead or software architect is to design software that fulfills the stakeholders' vision. However, as the design progresses, conflicting requirements often arise, affecting the candidate architecture. Resolving these conflicts typically involves making architectural trade-offs (e.g., service granularity vs. maintainability). Additionally, with time-to-market pressures and the need to do more with less, adopting comprehensive frameworks like TOGAF or lengthy processes like ATAM may not be feasible. Therefore, it is crucial to deeply understand these architectural trade-offs and employ lightweight resolution techniques.
Everyone (including your humble speaker) talks about Modularization. For once, let's focus on other interesting and exciting parts of Java 9 that do not involve modularization. In this presentation we will talk about key improvements and additions to the Java language and the JDK introduced in Java 9.
We will first focus on things that can be done better in Java 9 than in previous versions of Java.
Then we will look at things we simply could not do before but we can benefit from now.
And finally, we will not talk about Modularization. Well, I lied; we will discuss that just a bit.
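To give a taste of what's in store, here are a few of the smaller Java 9 additions in this vein (a sketch; the session's actual examples may differ):

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.stream.Stream;

public class Java9Additions {
    public static void main(String[] args) {
        // Collection factory methods create compact, immutable collections.
        List<String> langs = List.of("Java", "Kotlin", "Groovy");
        Map<String, Integer> released = Map.of("Java 8", 2014, "Java 9", 2017);

        // Stream gains takeWhile/dropWhile for ordered streams.
        Stream.of(1, 2, 3, 4, 1, 2)
              .takeWhile(n -> n < 4)   // prints 1, 2, 3
              .forEach(System.out::println);

        // Optional.ifPresentOrElse handles both branches in one call.
        Optional.of("hello")
                .ifPresentOrElse(System.out::println,
                                 () -> System.out.println("empty"));

        System.out.println(langs + " " + released);
    }
}
```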
Reactive Programming is no longer an esoteric idea. Many libraries and tools now provide excellent support for this style of programming, so much so that Java 9 has embraced it by including it in the specification. In this presentation we will dive into what reactive programming looks like from the Java specification's point of view.
Reactive Programming
What problem does it solve?
How do you use it from a Java point of view?
What can you do to make good use of it? (A sketch follows below.)
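As a taste of the specification's API, here is a minimal sketch using Java 9's java.util.concurrent.Flow types (illustrative only, not necessarily the session's demo):

```java
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    public static void main(String[] args) throws InterruptedException {
        // SubmissionPublisher is the JDK's reference Publisher implementation.
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);               // backpressure: ask for one item
                }
                @Override public void onNext(String item) {
                    System.out.println("received: " + item);
                    subscription.request(1);    // ask for the next item
                }
                @Override public void onError(Throwable t) { t.printStackTrace(); }
                @Override public void onComplete() { System.out.println("done"); }
            });
            publisher.submit("hello");
            publisher.submit("reactive world");
        }
        Thread.sleep(500);  // give the asynchronous delivery a moment before exit
    }
}
```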
After years of stagnation, Java is one of the most vibrant languages of our time. Java 8 saw several improvements and Java 9 a few more. That's only the beginning. Come to this talk to learn where Java is heading and what's brewing in the language and the ecosystem, well beyond Java 9.
We will look ahead into the next versions of Java, what you can expect, and how you can get ready.
In this example-driven presentation, you'll learn how to leverage Spring Boot to accelerate application development, enabling you to focus your coding on the logic that drives application requirements, with little concern for code that satisfies Spring's needs.
For over a decade, Spring has sought to make enterprise Java development easier. It began by offering a lighter alternative to EJBs, but continued on to address things such as security, working with various sorts of databases, cloud-native applications, and reactive programming. And along the way, Spring even took steps to make itself easier to use, offering Java-based and automatic component configuration. Even so, there's still a lot of near-boilerplate code required to develop Spring applications.
Enter Spring Boot. Spring Boot's primary purpose is to make Spring easier to work with. It achieves this in three ways:
Automatic configuration, which sensibly configures beans based on what it finds in the classpath and the environment
Starter dependencies, which pull in everything needed for a given capability with a single dependency
Runtime insight into the running application by way of the Actuator
All together, Spring Boot lets you focus on fulfilling your application's requirements without worrying about writing code that satisfies the needs of a framework.
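To see just how little framework code is required, here is a minimal sketch of a complete Spring Boot web application (the endpoint and names are illustrative; assumes the spring-boot-starter-web dependency):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// One annotation enables component scanning and auto-configuration;
// the starter on the classpath brings in an embedded servlet container.
@SpringBootApplication
@RestController
public class HelloApplication {

    @GetMapping("/hello")
    public String hello() {
        return "Hello, Spring Boot!";
    }

    public static void main(String[] args) {
        SpringApplication.run(HelloApplication.class, args);
    }
}
```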
In this session, you'll learn how to take your Spring Boot skills to the next level, applying the latest features of Spring Boot. Topics may include Spring Boot DevTools, configuration properties and profiles, customizing the Actuator, and crafting your own starters and auto-configuration.
TBD
In this session we'll see how to use components of Spring Cloud to configure and discover microservices in a cloud-native, microservice-architected application. Specifically, we'll look at Spring Cloud Netflix and Netflix's Eureka and Ribbon projects for service discovery, as well as the Spring Cloud Config project's configuration server to provide a central point of external configuration.
Microservice-architecture presents many benefits, not the least of which include opportunities to develop, evolve, scale, test, and deploy individual modules of an application independently from the rest of the application. But once an application is broken apart, a new set of challenges arises.
If the individual microservices are to work together in service of the greater application, then they must know about each other. Rather than hardcode the specifics of each microservice within every other microservice, it's better to provide a registry with which each service can register and which other services may use to look up those service details.
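A rough sketch of what registration and lookup can look like with Spring Cloud (the service names are illustrative; assumes a spring-cloud-starter-netflix-eureka-client dependency and a running Eureka server):

```java
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.ServiceInstance;
import org.springframework.cloud.client.discovery.DiscoveryClient;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;
import org.springframework.stereotype.Service;

// Registers this application with the configured registry (Eureka here)
// under its spring.application.name.
@SpringBootApplication
@EnableDiscoveryClient
public class CatalogServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(CatalogServiceApplication.class, args);
    }
}

@Service
class InventoryLocator {

    @Autowired
    private DiscoveryClient discoveryClient;

    // Look up a collaborator by its logical name instead of a hardcoded URL.
    public String firstInventoryUrl() {
        List<ServiceInstance> instances =
                discoveryClient.getInstances("inventory-service");
        return instances.isEmpty() ? null
                                   : instances.get(0).getUri().toString();
    }
}
```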
Microservices, like any application, will also likely depend on application-level configuration. It's generally a good idea to keep that configuration separate from the application itself, and this is even more true with microservices. Using a centralized configuration service, microservices can obtain their configuration details, some of which may be shared across many microservices. Because it is separate from the application/microservice code, that configuration itself can be versioned and evolve at a pace independent of the microservices it serves.
In this session, we'll look at how to monitor the health of individual services in a microservice-architected application, as well as trace the path of data and processing as it flows from service to service. Specifically, we'll look at Spring Cloud Netflix and the Netflix OSS Hystrix project, not only to apply the circuit breaker pattern for failure and latency protection in microservices, but also to monitor the health of those circuit breakers in a running application. Additionally, we'll look at Spring Cloud Sleuth and Zipkin to trace processing through a microservice-based application.
Microservice-architecture presents many benefits, not the least of which include opportunities to develop, evolve, scale, test, and deploy individual modules of an application independently from the rest of the application. But once an application is broken apart, a new set of challenges arises.
Failure is always a possibility in any application. But when an application is composed of microservices, it's important that each service contain its failures and keep its problems to itself, to avoid cascading errors to other services. Likewise, latency is a concern that is compounded when felt across multiple microservices, so it's important to avoid cascading latency issues as well. By applying the circuit breaker pattern in your microservices, you can ensure that if anything goes wrong in a given service, it stays in that service and doesn't negatively impact other services. Moreover, metrics published by those circuit breakers can provide helpful insight into the health of an application.
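By way of illustration, here is a minimal sketch using Hystrix's Javanica annotations (assumes spring-cloud-starter-netflix-hystrix and @EnableCircuitBreaker on the application class; the URL is a placeholder):

```java
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;

@Service
public class RecommendationClient {

    private final RestTemplate rest = new RestTemplate();

    // Hystrix wraps this call: repeated failures or timeouts open the
    // circuit, and callers get the fallback immediately instead of waiting.
    @HystrixCommand(fallbackMethod = "defaultRecommendations")
    public String recommendationsFor(String userId) {
        return rest.getForObject(
                "http://localhost:8081/recs/{id}",  // placeholder URL
                String.class, userId);
    }

    // The fallback must match the guarded method's signature.
    public String defaultRecommendations(String userId) {
        return "[]";  // a safe, empty default
    }
}
```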
When things do go wrong, it may also be helpful to trace the flow and processing of information as it passes from one microservice to another. Traditionally, tracing involves following the logs of a single application. But in a microservice-based application where many services and many instances of those services are involved, tracing is much more challenging. What's needed is a way for microservices to submit timing and tracing information to a central tracing service that coordinates that data and can present the flow of data in a single, easily digested view.
In this session, we'll see how to develop microservices that are components of a flow of data but that do not necessarily involve REST. We'll also see how to develop ephemeral microservices: microservices that are triggered to perform a job, then shut down when that job is complete. Specifically, we'll look at Spring Cloud Stream for flow-based microservices, Spring Cloud Task for ephemeral microservices, and Spring Cloud Data Flow, which orchestrates those stream- and task-based services.
Microservice-architecture presents many benefits, not the least of which include opportunities to develop, evolve, scale, test, and deploy individual modules of an application independently from the rest of the application. But once an application is broken apart, a new set of challenges arises.
Microservices are commonly thought of as small REST-based services that are assembled to form a larger, more complete application. In reality, however, REST is merely the communication mechanism, an implementation detail that is not intrinsic to the notion of microservices. Meanwhile, data processing and integration between the various components of an application and external services is a key factor in many applications, and in cloud-native applications this kind of data flow and processing is still relevant.
Serverless is a big topic these days. More specifically, Function as a Service (FaaS) allows small-scale functions deployed in the cloud to run only as needed. Compared to Platform as a Service (PaaS) offerings, which are “always on” and require explicit scaling, FaaS services wake up when needed and are scaled by the platform automatically.
Imagine a service that only gets invoked once per hour. In PaaS, that service is always running and ready to handle requests, consuming processor resources and ultimately costing more to be ready at a moment's notice. In contrast, FaaS services awaken quickly upon invocation and go away when finished, conserving resources and cost.
In this example-driven session, we'll see how to create and deploy Function as a Service (FaaS) services with Spring and Spring Boot.
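As a taste of what that looks like with Spring, here is a minimal Spring Cloud Function sketch: the business logic is just a java.util.function.Function bean, and the framework adapts it to the target FaaS platform (assumes the spring-cloud-function dependencies; deployment details vary by platform):

```java
import java.util.function.Function;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class UppercaseFunctionApplication {

    // The deployable unit is a plain Function; no servlet or controller code.
    @Bean
    public Function<String, String> uppercase() {
        return String::toUpperCase;
    }

    public static void main(String[] args) {
        SpringApplication.run(UppercaseFunctionApplication.class, args);
    }
}
```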
In this session, we'll explore the new reactive features in Spring 5 to build reactive, non-blocking applications using Spring's familiar programming model.
Traditionally, applications have been built using a blocking, synchronous model. Although comfortable and intuitive for most programmers, this model doesn't scale well. And although there are several new approaches to reactive programming, they don't necessarily fit into the familiar programming model that Spring developers are accustomed to working with.
Spring 5 has introduced a set of new reactive features, enabling non-blocking, asynchronous code that scales well using minimal threads. Moreover, it builds on the same concepts and programming models that Spring developers have used for years.
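For a flavor of that programming model, a reactive endpoint in Spring WebFlux looks much like its Spring MVC counterpart but returns Reactor types. A minimal sketch, assuming the spring-boot-starter-webflux dependency (the endpoint and data are illustrative):

```java
import java.time.Duration;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import reactor.core.publisher.Flux;

@RestController
public class QuoteController {

    // Returns a non-blocking stream: one quote per second, with no thread
    // held while waiting between elements.
    @GetMapping("/quotes")
    public Flux<String> quotes() {
        return Flux.just("Buy low", "Sell high", "Diversify")
                   .delayElements(Duration.ofSeconds(1));
    }
}
```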
An integral part of any DevOps effort involves automation. No longer do we wish to manage tens, hundreds, or even thousands of servers by hand, even if that were possible. What we need is a programmatic way to create and configure servers, whether for local development or all the way to production.
This is where tools like Ansible come into play. Ansible offers us a way to define what our server configurations should look like using plain-text, version-controlled configuration files.
Not only does this help us avoid “snowflake” servers, but it promotes server configuration into the SDLC, pulling server configuration closer to the developers.
In this session we will explore what Ansible has to offer, decipher the Ansible terminology, and run some examples to configure a local server.
Ansible, like Git, aims to be a simple tool.
The benefit here is that the level of abstraction Ansible offers is paper-thin, with no complicated workflows or opinions enforced by the tool itself.
The downside is that without a prescribed approach to Ansible, developing your playbooks often becomes a case of trial-and-error.
As engineers steeped in the DevOps mindset, we must be able to use the tool effectively, allowing us to accelerate and shorten the lead time from development to production.
In this session we will take a look at some lessons learned when working with Ansible. Topics covered:
We developers really like code.
Code, being plain text, can be version-controlled and follow a traditional SDLC.
For the longest time, however, we were forced to live with most of our CI/CD and server configurations living outside our codebases, often at the mercy of infrastructure/operations teams.
With the evolution of DevOps come constructs like IaC (Infrastructure as Code), and with Jenkins 2.0, we can now manage our Jenkins job configurations as code!
In this session we will explore the concept of “Pipelines as Code”, including the DSL that Jenkins offers, and how we can use it to configure Jenkins jobs via simple, version-controlled Jenkinsfiles. We will see how we can create Jenkins jobs by autodiscovering repositories, as well as when we branch our code to create releases.
On the NFJS tour, there are questions that seem to come up again and again. One common example is “How do we determine which new tools and technologies we should focus our energy on learning?” Another is “How do we stop management from forcing us to cut corners on every release so we can create better and more maintainable code?” which, after a while, becomes “How can we best convince management that we need to rewrite the business application?”
There is a single meta-answer to all these questions and many others.
It begins with the understanding that what we as engineers value and what the business values are often very different (even if the ultimate goals are the same). By understanding these different perspectives, it's possible to begin to frame our arguments around the needs and wants of the business. This alone will make any engineer significantly more effective.
This session picks up where “Stop writing code and start solving problems” leaves off, discussing what value is and how to align the values of the business with the needs and values of the engineer.
By the end of this conference you will have learned many new tools and technologies. The easy part is done; now for the hard part: getting the rest of the team (and management) on board with the new ideas. Easier said than done.
Whether you want to effect culture change in your organization, lead the transition toward a new technology, or are simply asking for better tools, you must first understand that having a “good idea” is just the beginning. How can you dramatically increase your odds of success?
You will learn 12 concrete strategies to build consensus within your team, as well as six techniques to dramatically increase the odds that the other person will say “Yes” to your requests.
As a professional mentalist, Michael has been a student of psychology, human behavior, and the principles of influence for nearly two decades. There are universal principles of influence that are necessary to understand and leverage if you want to be a more effective leader of change in your organization.
In this session we discuss strategies for getting your team on board, as well as when and how to approach management within the department and higher up in the organization.
In Part 1, you learned the core principles of influence and persuasion. How do we take this back to the office and apply what we've learned?
We dive deep into specific strategies to get both the team and the business on board with your ideas and solutions. We cover several real-world patterns you can follow to be more effective and more persuasive. Part 1 was conceptual; Part 2 is practical.
In this session you will learn to strategically introduce technology innovations by applying specific change patterns to groups of individuals. Using these patterns and related techniques will not only benefit your organization but will ultimately benefit your career as a technologist by making you a better influencer, writer, and speaker.
The rapid pace of technological innovation has enabled many organizations to dramatically increase productivity while at the same time decreasing their overall headcount. However, the vacillating global economy combined with “change fatigue” within organizations has resulted in a risk-averse culture. In such an environment, how can one possibly introduce and inculcate the latest technology or process within an organization? The answer is to have a solid understanding of Diffusion Theory and to leverage Patterns of Change.
Prezi Location: http://prezi.com/b85wwmw7hccn
React is a focused library for creating rich front ends. While React does fairly well what it intends to do, developing a practical application requires more than React alone.
In this presentation we will build the front end for a small application that pulls data from a backend and along the way learn how to use React and related libraries to achieve the desired results.
In this presentation, we will dig deeper into creating applications with React.
We will take a deeper look into creating components, dealing with life cycle events, managing state, when and how to use Redux, and how to approach automated testing of the front-end code you create.
JavaScript is no longer that scary, nasty language. There are so many wonderful features in modern versions of the language that I can't stop praising it. In this presentation we will focus on some of the most exciting features that make programming in JavaScript a pleasant experience.
We will dive into the beautiful parts and features of ES6, ES7, and ES8.
To say the least, writing classes in JavaScript used to be a weird experience. It never felt right. Well, the language has finally set that right. In this presentation we will learn about the different capabilities JavaScript offers for OO programming. Along the way we will dive into some advanced concepts as well.
Learn how to program with this powerful language that has matured significantly in recent years.
You have some modular code with a REST API. You are on your way to Microservices. Next, you package it in a container image that others can run. Simple. Now what? Your service needs to log information, needs to scale and load balance between its clones. Your service needs environment and metadata way outside its context. What about where the service will run? Who starts it? What monitors its health? What about antifragility? Updates? Networking? Oh my.
Don't get flustered. We will explore how Kubernetes simplifies the complexity of distributed computing.
This session will help you understand the terms, architecture and the mechanics of the Kubernetes tools. You will understand how to target your applications to a seemingly complex distributed compute platform.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts (1 of 2)
Aha moments with apps in containers can be quite liberating. The mobile space is saturated with “there's an app for that”. For us, we now expect “there's a container for that”. “Write once, run anywhere” (WORA) has changed to “Package once, run anywhere” (PORA). The growing community of containers is riding up the hype curve. We will look at many ways to assemble pods using architecture patterns you already know.
Your software package delivery and installation is no longer an RPM, DEB, DMG, JAR, WAR, native executable, or run script; it is simply an image that is run with a common container command.
During the presentation, we will explore some examples on Katacoda.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts
At the 2009 Agile conference, J.B. Rainsberger declared, “Integration tests are a scam.” I agree. Come see some compelling reasons why consumer-driven contract testing is a much better approach, particularly for microservices.
We will explore different testing techniques on Kubernetes, including an important one called “Consumer-Driven Contracts”.
After a brief overview of the concepts a live demonstration will show you how to:
This is the droid you are looking for. Within this droid are hundreds of rules designed to review your code for defects, hotspots and security weaknesses. Consider the resulting analysis as humble feedback from a personal advisor. The rules come from your community of peers, all designed to save your butt.
We will explore techniques on how to add these checks to your IDE, your build scripts and your build pipelines.
Too much chatter in your pull requests? See how the analysis tools teach best practices, without ego or criticism, to a spectrum of developers. As a leader, see how to develop an effective code-quality intern program around this technique. We will also see some techniques for using Kubernetes to obtain reports and dashboards right on your local machine and from your continuous integration pipeline.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts.
From operating systems on bare metal, to virtual machines on hypervisors, to container orchestration platforms: how we run our code and bundle our applications continues to evolve. Serverless computing continues the evolutionary path of our architectures.
Kubernetes provides an ideal, vendor-agnostic platform for serverless computing. Serverless promises that developers can worry less about the cluster and focus more on their logic. Building on your Kubernetes knowledge, we will survey the various serverless frameworks on Kubernetes. Specifically, we will unpack how two open source serverless frameworks, Kubeless and OpenFaaS, leverage Kubernetes to achieve their promises. We will explore how Knative is helping serverless providers evolve to the next level of sophistication.
Prerequisite: If you are unfamiliar with Kubernetes be sure to attend: Kubernetes Koncepts.
Kubernetes is a powerful platform for running containers and distributing computation workloads across resources. A significant question is: how do you get all your code to this platform, continuously?
In 2019 our community is bursting with new solutions to assist our delivery pipelines. While Jenkins is a dominant player, there is a growing array of new ideas and choices. From coding at your laptop, to building containers, to deployments, we will explore the various tools and techniques that reduce delivery friction.
Kubernetes is also a fitting platform for hosting your continuous tools, pipeline engines, registries, testing, code analysis, security scans, and delivery workflows.
From this session, you will understand the latest tools and techniques for pipelining on Kubernetes. Let's up the game on your Maturity Model.
Creating code is easy; creating good code takes a lot of time, effort, discipline, and commitment. The code we create is truly the manifestation of our designs. Creating a lightweight design can help make the code more extensible and reusable.
In this presentation we will take an example-oriented approach to look at some core design principles that can help us create better designs and more maintainable code.
In this session we will get acquainted with Docker. We will discuss what Docker is, how to install it, and how to start using it. We will also explore some of the benefits of containerizing your applications.
Containers are taking over the world. Containers provide a means to have hermetic builds of your software, allowing for truly immutable testing and delivery of your software. Docker is one of many containerization technologies, and in this session we will take a brief look at Docker and what it has to offer.
In this session we will dive deeper into Dockerfiles. We will explore the DSL that Dockerfiles provide to allow for the automation of image creation.
Dockerfiles provide a means to automate the creation of images and, consequently, the containers within which our applications run. The Dockerfile, though minimal, provides us with everything we need to package our software and enable it to run. In this session we will dive deep into the Dockerfile DSL, explore the many commands it provides, and along the way examine some differences between similar commands, and some gotchas.
As developers we not only operate in different contexts, but also often have these different contexts interplay as part of our work.
Each of the tools that we use — version control systems like Git (along with collaborative tools like GitHub/GitLab), IDEs like Eclipse/IntelliJ, build systems like Gradle, CI/CD tooling like Jenkins, IaC tools like Ansible, the command line — introduces its own context.
To be effective developers we need to know when to operate in a certain context, and when to combine or tease apart the ways these contexts interplay.
Can you improve your release announcements if you format your commit messages consistently? You bet!
How should your build tool interact with your version control system?
What does naming your files have to do with how you use your IDE?
This session will take a look at several of these contexts — it will attempt to discern between them, and explore when you should separate them and when you should bring them together.
With lots of examples and lots of quizzes, this session will definitely leave you thinking about a few things.
Good discussions are supposed to diverge from their intended path. Free association is a feature, not a bug, and helps you see new connections between ideas. Without structure, however, it can be difficult to add context to new ideas and understand how they relate to more immediate problems. This talk discusses the technique of mental bookmarks – how to remember where you were when a discussion diverged. In addition to giving you a reputation for having an amazing memory, the skill also helps with personal awareness in general.
To give the technique context, we'll look at the fractal nature of success – the way we tend to see our current environment in relative terms, always comparing ourselves to those slightly more successful and slightly less successful.
Unlock your latent photographic memory. In this session you'll learn failsafe techniques and systems that allow you to never forget names, appointments, or numbers. In the process you'll be more effective and imaginative at work; improve reading speed and comprehension, and shorten study times.
An improved memory will change your life, literally. In the session we will describe in detail several memory techniques that, with a little practice, will have you remembering virtually anything you want.
Kotlin is a practical language designed for the JVM. It focuses on pragmatism and safety, with an emphasis on interoperability and tool support. Kotlin is statically typed and includes null checks in the type system itself. The language runs anywhere Java does, from microservices to Android apps. This workshop will show how to use Kotlin to simplify anything you originally planned to do in Java.
Kotlin combines object-oriented programming with functional features like lambda expressions, functional types, and data classes. Since it comes from JetBrains, the company that produces the IntelliJ IDEA development environment, its tool support is excellent. IntelliJ even provides a Java-to-Kotlin converter to help you get started.
This presentation will cover the basic types, defining and calling functions, programming with lambdas, higher order functions, and DSLs. The goal is to give you enough background to be productive while you continue to learn and grow with the language.
Gradle is the build tool of choice in the open source world, and rapidly becoming the standard in industry as well. Anyone who works with Gradle on a Java project knows the basics of the Java plugin and how to write simple tasks in Groovy. Gradle can do much more, however. This talk will demonstrate how to write your own custom task classes and how to create Gradle plugins from them. Other Gradle features will be demonstrated as well, including file manipulation, incremental builds, generating the Gradle wrapper, and resolving conflicts in dependencies.
Gradle Inc. also provides a free build scan capability for analyzing builds. This too will be demonstrated, along with profiling your build, determining dependencies, and more.
JavaScript will celebrate its 24th birthday in 2020. For a language that has been around so long, it saw very few changes, if any, until recently. All of that changed with ECMAScript.next (ECMAScript 6). ECMAScript 6 modernizes JavaScript syntax while bringing in features such as modules for better namespacing, class as a first-class construct, and a variety of additional operators, ensuring that JavaScript is ready for the next era of large-scale modern web applications. ES7, 8, 9, and now 10 all build on the features introduced by ES6 to further the language.
In this session we will take a look at the features that ECMAScript 6/7/8/9/10 bring to the table. We will take an exploratory approach, and by the end of three hours, you will be well versed in ALL of the new features in JavaScript.
Design patterns are commonplace in OO programming. With the introduction of lambda expressions in languages like Java, one has to wonder about their influence on design patterns.
In this presentation we will take up some of the common design patterns and rework them using lambda expressions. We will also explore some other patterns that are not so common but offer quite useful ways to apply lambdas.
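As one small example of the kind of rework we'll do (a sketch, not necessarily the session's exact demo), the Strategy pattern collapses from an interface plus named implementations into lambdas passed as a java.util.function.Predicate:

```java
import java.util.List;
import java.util.function.Predicate;

public class StrategyWithLambdas {

    // The "strategy" is simply a Predicate; no named implementations needed.
    static int totalValues(List<Integer> values, Predicate<Integer> selector) {
        return values.stream()
                     .filter(selector)
                     .mapToInt(Integer::intValue)
                     .sum();
    }

    public static void main(String[] args) {
        List<Integer> values = List.of(1, 2, 3, 4, 5, 6);
        System.out.println(totalValues(values, n -> true));        // all: 21
        System.out.println(totalValues(values, n -> n % 2 == 0));  // evens: 12
        System.out.println(totalValues(values, n -> n > 3));       // big: 15
    }
}
```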
Interest in MongoDB and other NoSQL platforms has waxed and waned over the years; however, Mongo remains an enormously useful tool.
In this session, you will learn everything you need to know to master MongoDB.
We dive deep into advanced topics: data architecture, tooling options, clustering, replication, and sharding. You'll learn when Mongo is the perfect tool for the job (and when it isn't) and what's new in 2018.
Big Data is exploding into our industry, and these skills are increasingly in demand. Much of our existing body of knowledge needs to be updated, however, as Big Data solutions present a number of unique challenges. In this half-day workshop we start with the questions “What is Big Data?” and “How do Big Data solutions differ from traditional data architectures?” We then dive into core Hadoop technologies to tackle your Big Data challenges and explore several tools and technologies for distributed processing of large datasets.
The Big Data space is rapidly changing, and with a seemingly dizzying number of technology choices, one might not know where to begin. Our goal with this half-day workshop is to give you an overview of a number of technologies, their respective strengths and weaknesses, and enough knowledge to make informed decisions and build great tech!
This is a revised and updated version of the previous talk, with current thinking from practice and the literature. The talk presents why conflicts with your manager are inevitable based on differences in priorities and perspectives, and how to plan for them. The goal is to show you how to build the loyalty relationship that allows you to get what you need when you need it.
Topics covered will include diagnosing communication styles, lessons from game theory, working within the organizational hierarchy, and lessons on how to build a relationship with your manager that still allows you the freedom to express yourself and what you really want.
We all have seen our share of bad code, and some really good code as well. What are some of the common anti-patterns that recur over and over in code that sucks? By learning about these code smells and avoiding them, we can greatly improve our code.
Come to this talk to learn about some common code smells and to share your experiences as well.
This session covers the landscape of Big Data tools, technologies and best practices in 2018. You'll leave this session armed with the knowledge you need to build Big Data solutions by assembling the best technologies for you.
We cover the components of a big data pipeline, options available for each module and the pros, cons and best practices for each option.
Reactive programming is no longer a fancy idea. It is not only gaining popularity and acceptance, it is being widely adopted at almost every level of software architecture.
In this presentation we will spend a short time reviewing what reactive programming is and its core characteristics, and then dive into different parts of a typical enterprise application to see how reactive programming serves to improve performance and scalability.
Prerequisite: If you are unfamiliar with Kubernetes or Istio meshing be sure to attend: Understanding Kubernetes: Fundamentals or Understanding Kubernetes: Meshing Around with Istio.
Kubernetes is a complex container management system. Your application running in containers is also a complex system, as it embraces the distributed architecture of highly modular and cohesive services. As these containers run, things may not always behave as smoothly as you hope. We must embrace the notions of antifragility and design our systems to be resilient despite the realities of resource limitations, network failures, hardware failures, and failed software logic. All of this demands a robust monitoring system that opens views into the behaviors and health of your applications running in a cluster.
Three important aspects to observe are log streams, tracing, and metrics.
In this session, we look at some example microservices running in containers on Kubernetes. We add Istio to the cluster for meshing. We observe how logs are gathered, see how transactions are traced and measured between services, inspect metrics, and finally add alerts when metrics indicate a problem.
Java is now on a six-month release schedule, with new features being added all the time. This talk summarizes the features that have been added to Java, including collection factory methods, private methods in interfaces, records, switch expressions, and more. The goal is to show not only what has been added, but why and where to use them.
Additional topics will include local variable type inference, the new HTTP client, pattern matching, and text blocks. This talk will be updated throughout the year as more features are added to new releases.
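A compressed sketch of several of these features working together (records, switch expressions, text blocks, and local variable type inference; requires a recent JDK):

```java
public class ModernJava {

    // A record declares an immutable data carrier in one line.
    record Point(int x, int y) { }

    static String quadrant(Point p) {
        // Switch as an expression: no fall-through, yields a value.
        return switch (Integer.signum(p.x())) {
            case 1  -> p.y() >= 0 ? "I" : "IV";
            case -1 -> p.y() >= 0 ? "II" : "III";
            default -> "on an axis";
        };
    }

    public static void main(String[] args) {
        var p = new Point(3, -4);              // local variable type inference
        String report = """
                point:    %s
                quadrant: %s
                """.formatted(p, quadrant(p)); // text block
        System.out.println(report);
    }
}
```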
The Gradle build tool is one of the most successful projects in the Groovy ecosystem because it addresses a difficult problem – every major build is a custom build. Gradle builds are written in Groovy, so the full power of the language is available if you need it. Gradle supports Maven project structure and repositories and uses Ivy dependency management without being bound by their normal constraints. With major systems like Grails, Hibernate, and the Spring Framework moving to Gradle, this is a technology worth taking the time to understand.
This talk will cover the basics of Gradle both through simple examples and by examining the build files for major open source projects.
There are currently over two billion Android devices in the world, ranging from phones to tablets to wearables to automotive apps. Each of them supports development using the open source Android operating system. This talk will show how to create an Android project, work with Activities and Intents, and work with the underlying Gradle build system.
The talk will work with the current version of Android Studio and include tips and tricks on its use.
Building on Part 1, this talk will add REST capabilities, discuss how to work with SQL databases, and perform asynchronous operations in Android.
Code will make use of popular Android libraries, like ButterKnife, Retrofit, OkHttp, and Room.
Adoption of the Kotlin programming language has been most rapid among Android developers. Kotlin is ideally suited to Android development, because it is safe, fast, and optimized for small footprint environments. It is also officially supported by Google for Android development. This talk will review the features of Kotlin that work best for Android, from using the Android Studio IDE, to working with lambda expressions, using extension methods, and more.
We'll look at creating an Android project with Kotlin support, using the Android APIs with Kotlin, using the Anko extension library, and coroutines for concurrency.
It happens to us all; there are simply days where it seems impossible to get anything done. This session focuses on techniques and tips to get into the zone, stay in the zone, and protect your productivity, even in disruptive environments.
Rather than focusing on any one productivity methodology (e.g., GTD), this talk analyzes the internal and external factors that affect our productivity and offers broader strategies to get back on track.
Machine Learning is a huge, deep field. Come get a head start on learning how machines learn.
This talk will be an overview of the Machine Learning field. We’ll cover the various tools and techniques that are available to you to solve complex, data-driven problems. We’ll walk through the algorithms and apply them to some real but accessible problems so you can see them at work.
Documents contain a lot of information. We'll introduce you to a variety of techniques to extract it.
Machine Learning techniques are useful for analyzing numeric data, but they can also be useful for classifying text, extracting content, and more. We will discuss a variety of open source tools for extracting content, identifying elements and structure, and analyzing text, all of which can be used in distributed, microservice-friendly ways.
While the Web itself has strong decentralized aspects to how it is used, the backend technologies are largely centralized. The naming systems, the routing systems, and the traffic that all point back to the same place for a website are all centralized technologies. This creates both a liability and a control point.
In order to break free of some of these limitations, new technologies are emerging to provide a more decentralized approach to the Web.
This talk will walk you through some emerging technologies that provide decentralized content storage and distribution, edge computing, and more. We will touch upon the InterPlanetary File System (IPFS), WebTorrent, Blockchain spin-offs, and more.
What would happen if web applications got really fast?
We are increasingly able to do more in the browser thanks to faster networks, optimized JavaScript engines, new standard APIs, and more. A new initiative called WebAssembly defines a compiled, cross-platform binary format that will take us to the next level. Complex business applications and 3D video games alike will benefit from this new standard. Come hear about what it can do for you.
Bitcoin has firmly entered the public consciousness, but it is limited in its use beyond the specific constraints of the cryptocurrency. Ethereum is a newer platform that enables developers to innovate in creating their own cryptocurrencies, platforms, smart contracts, and more.
This talk will introduce the larger concepts of blockchains and decentralized applications as well as details on how to build running applications on the Ethereum platform.
These ideas and tools will help innovators disrupt organizations, markets, entire industries, and even aspects of society. It sounds like science fiction, but these things are already happening. Come learn how.
We will cover:
Like all languages, Groovy code can be written in many different ways. This talk will review how to write Groovy that follows the most commonly accepted idioms and practices, especially in light of the changes in Java 8. Features demonstrated will include closures, class and method definitions that best accommodate Java integration, testing techniques, and more.
Groovy is, relatively speaking, a mature language on the JVM. Adding it to existing Java systems is easy. Following the recommended practices helps make maintenance and extensibility practical.
Redux has fast become one of the pillars of state management in the modern era of web application development. Though tiny, it packs a punch, and in this session we will explore the principles behind Redux and see how to use it in our web applications.
TBD
In this session, you'll learn how to take advantage of the latest features of Spring Security and Spring Boot to secure an application, both at the web layer and at the method-level.
Information is probably the most valuable item we now have; crooks are looking for ways to steal our data and identities by sneaking into unsecured applications. As software developers, we must take steps to protect the information that resides in our applications. Whether it’s an email account protected with a username/password pair or a brokerage account protected with a trading PIN, security is a crucial aspect of most applications.
Spring Security is an extension to the Spring Framework that provides security to Spring applications, both at the HTTP request level and at method invocation.
Enterprise Big Data cloud architects and developers need Big Data solutions deployed in the cloud for reliability, scalability, agility, availability, and security. Comprehensive, end-to-end, hands-on guidance is scarce, and enterprise architects must be able to design Big Data solutions that maximize linear scalability and high availability. Why? Big Data analytics use cases require sub-second response times with at least 99.99% availability, using massively parallel processing.
We will start with Apache Cassandra, one of the best solutions for storing data with high availability and scalability. Modeling strategies help in designing solutions based on the application flow. Later, we will explore Cassandra time-series data modeling.
Big Data applications require faster data processing and analysis. We will study application topology, with examples for building Spark applications, and deep dive into Apache Spark applications that connect to Cassandra. Spark Streaming is used for processing real-time data.
We will explore the Spark, Mesos, Akka, Cassandra, and Kafka (SMACK) stack, and look at how it helps build massively scalable applications in the cloud. Mesos is a distributed systems kernel that can run on every machine, providing applications like Hadoop, Spark, Kafka, and Elasticsearch with APIs for resource management and scheduling across entire datacenter and cloud environments; it scales to 10,000 nodes and achieves fault tolerance through replicated masters coordinated by ZooKeeper. Akka is a toolkit and runtime for building highly concurrent, distributed, and resilient message-driven applications on the JVM. Kafka is a distributed publish-subscribe messaging system designed to be fast, scalable, and durable.
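To make Kafka's role in the stack concrete, publishing an event takes only a few lines with the standard Java client (a sketch; the broker address and topic name are placeholders):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SensorEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // placeholder broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Fire-and-forget publish; Kafka persists and replicates the record.
            producer.send(new ProducerRecord<>("sensor-events",
                                               "sensor-42", "{\"temp\": 21.5}"));
        }
    }
}
```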
The objective of this talk is to give clear direction on how to create scalable, cloud-ready Big Data applications, with practical recipes for solving common problems and building an elastic, resilient, scalable, multi-workload, efficient, and isolated architecture using the SMACK stack.
We live and operate in a world where our services not only need to be available 24/7, but must also respond to surges in demand and scale down when demand is lower.
Reactive systems and reactive architectures have arisen to address this very concern.
Vert.x is a polyglot toolkit that makes writing reactive applications on the JVM possible.
With a non-blocking, event-driven architecture, Vert.x can help you scale your application, leveraging the multi-threaded nature of the JVM and the multi-core abilities of your hardware.
In this session we will take a look at Vert.x, its programming model, and its architecture.
We will write some code and explore the ecosystem around Vert.x, as well as discuss some of the gotchas you might come across as you develop Vert.x applications.
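To set the stage, here is a minimal Vert.x HTTP verticle (a sketch using the core API; the port and response text are arbitrary):

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start() {
        // The request handler runs on an event loop; it must never block.
        vertx.createHttpServer()
             .requestHandler(req -> req.response()
                                       .putHeader("content-type", "text/plain")
                                       .end("Hello from Vert.x!"))
             .listen(8080);
    }

    public static void main(String[] args) {
        Vertx.vertx().deployVerticle(new HelloVerticle());
    }
}
```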
We constantly hear that change should be affordable and cost-effective. True, but in reality that's easier said than done. Complexity makes change hard. We can't shy away from the hard problems posed by domains and business needs. So how can we solve complicated problems without getting dragged into the quagmire of what appears to be an inevitable complexity? In this keynote, an award-winning author and software practitioner will share experiences and observations from working on multiple software projects: what leads to complexity, the traps developers and organizations fall into, and what we can do to effectively deal with these common, recurring issues we see across domains and products.