Modularity is coming to the Java platform! Java 9 will introduce the Jigsaw module system. OSGi is here today. But don’t wait to start designing modular software. Contrary to popular belief, you don't need a framework or a new runtime to start building modular software applications. You can start today. Learn how!
In this session, we'll examine what it means to develop modular software on the Java platform. We'll discuss the goals and benefits of modular software and explore the patterns of modular architecture that help us develop modular software systems. With just a few easy steps, we'll see how to transform our software from a huge monolith to an extensible system of collaborating software modules. By examining an existing software system, we'll see firsthand how we can increase software modularity with minimal disruption. You'll walk away not just with a much deeper understanding of the benefits of modular software, but also with a migration roadmap for refactoring existing applications to increase their modularity. In other words, you'll see how to get ready today for the application platform of tomorrow.
OSGi is the dynamic module system for the Java platform. Today, OSGi is a major part of most application platforms and tools, and is supported by many major frameworks. In this session, we'll explore the fundamental underpinnings of OSGi and its ecosystem, and clearly articulate the benefits of OSGi.
After a gentle introduction to OSGi, we'll explore how OSGi allows us to overcome classpath hell and design very adaptable and flexible software systems. Through several coding examples, we'll explore numerous OSGi techniques, including versioning, hot deployment, isolation, lifecycle, and micro-services.
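As a taste of the micro-services technique, here is a minimal sketch of a bundle activator publishing a service into the OSGi service registry. It assumes an OSGi R4.3+ framework, and the GreetingService interface is a hypothetical stand-in for your own API (in a real system it would live in a separate API bundle).

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Inlined only to keep the sketch self-contained; normally exported by an API bundle.
interface GreetingService {
    String greet(String name);
}

public class Activator implements BundleActivator {

    private ServiceRegistration<GreetingService> registration;

    @Override
    public void start(BundleContext context) {
        // Publish an implementation into the OSGi service registry so other
        // bundles can look it up without depending on this implementation bundle.
        registration = context.registerService(
                GreetingService.class, name -> "Hello, " + name, null);
    }

    @Override
    public void stop(BundleContext context) {
        // Withdraw the service when the bundle stops (lifecycle in action).
        registration.unregister();
    }
}
```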
Despite our wishing it were so, software architecture is not static throughout a project. Yet, we make many of our software architecture decisions early in the software development lifecycle. In today’s volatile technology and business climate, big architecture up front is not sustainable. In this session, we will explore several techniques that help us create more flexible and adaptable software systems. But first, we’ll expose the true essence of what’s meant when we say “architectural agility.”
What’s the goal of architecture? To serve as a blueprint of the system? Possess the flexibility to evolve as new requirements emerge? To satisfy the architectural qualities, including performance, security, availability, reliability, and scalability? Yes. Yes. Yes. But how do we create software architectures that achieve all of these goals? And how do we ensure no disconnect occurs between developers responsible for implementation and architects responsible for the vision? Especially in a volatile technology climate where organizations seek to move their applications to the cloud, expose application capabilities to new endpoints, and deliver new capabilities quickly? In this session, we’ll explore techniques to increase architectural agility and provide some actionable advice that will help you get started immediately.
The first part of the Continuous Delivery workshop covers the differences between continuous integration, continuous deployment, and continuous delivery. It also introduces the deployment pipeline, along with usage, patterns, and anti-patterns. This part concludes with some applied engineering principles.
Releasing software to actual users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers and operations, delivery teams can get changes released in a matter of hours—sometimes even minutes—no matter what the size of a project or the complexity of its code base. The workshop materials are derived from the best-selling book Continuous Delivery and created in collaboration with the authors and other ThoughtWorks colleagues of mine. Continuous Delivery details how to get fast feedback on the production readiness of your application every time there is a change—to code, infrastructure, or configuration.
The first part of the workshop describes the technical differences between related topics such as continuous integration, continuous deployment, and continuous delivery. At the heart of the workshop is a pattern called the deployment pipeline, which involves the creation of a living system that models your organization's value stream for delivering software. I discuss the various stages, how triggering works, patterns and anti-patterns, and how to pragmatically determine what “production ready” means. This session also covers some agile principles espoused by the Continuous Delivery book, including new perspectives on things like developer workstations and configuration management.
Continuous Delivery relies on a variety of interlocking engineering practices to work efficiently; this session covers three related topics. First, I cover the role of testing and the testing quadrant. Second, I cover version control usage and offer alternatives to feature branching, such as feature toggles and branch by abstraction. Third, I describe some incremental release strategies, along with their impact on other stages of the project lifecycle.
Releasing software to actual users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers and operations, delivery teams can get changes released in a matter of hours—sometimes even minutes—no matter what the size of a project or the complexity of its code base. The workshop materials are derived from the best-selling book Continuous Delivery and created in collaboration with the authors and other ThoughtWorks colleagues of mine. Continuous Delivery details how to get fast feedback on the production readiness of your application every time there is a change—to code, infrastructure, or configuration.
Continuous Delivery relies on a variety of interlocking engineering practices to work efficiently; this session covers three related topics. First, I cover the role of testing and the testing quadrant, including the audience and engineering practices around different types of tests. I also cover some best practices around testing, including testing ratios, code coverage, and other topics. Second, I cover version control usage and offer alternatives to feature branching, such as feature toggles and branch by abstraction. Generally, I talk about building synergistic engineering practices that complement rather than conflict with one another. In particular, I discuss why feature branching harms three other engineering practices and describe alternatives. Third, I describe some incremental release strategies, along with their impact on other stages of the project lifecycle.
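To make one of those alternatives concrete, here is a minimal, hand-rolled feature toggle sketch in Java; the class, property, and feature names are hypothetical, and dedicated toggle libraries exist, but the essential idea is simply a configuration-driven decision point that lets unfinished work live on trunk switched off.

```java
import java.util.Properties;

// A minimal feature toggle: the decision is read from configuration at runtime,
// so incomplete features can ship disabled instead of living on a long branch.
public class Features {

    private final Properties config;

    public Features(Properties config) {
        this.config = config;
    }

    public boolean isEnabled(String feature) {
        return Boolean.parseBoolean(config.getProperty("feature." + feature, "false"));
    }
}

// Call site sketch: both code paths stay on trunk and integrate continuously.
//
//     if (features.isEnabled("newCheckoutFlow")) {
//         renderNewCheckout();
//     } else {
//         renderLegacyCheckout();
//     }
```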
Two big stumbling blocks for Continuous Delivery adoption are interactions with operations and the keepers of data. In this session, I first cover operations, DevOps, and programmatic control of infrastructure. Second, I discuss how to incorporate databases and DBAs into the Continuous Integration and Continuous Delivery process.
Releasing software to actual users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers and operations, delivery teams can get changes released in a matter of hours—sometimes even minutes—no matter what the size of a project or the complexity of its code base. The workshop materials are derived from the best-selling book Continuous Delivery and created in collaboration with the authors and other ThoughtWorks colleagues of mine. Continuous Delivery details how to get fast feedback on the production readiness of your application every time there is a change—to code, infrastructure, or configuration.
Two big stumbling blocks for Continuous Delivery adoption are interactions with operations and the keepers of data. First, I cover operations, DevOps, and programmatic control of infrastructure using tools like Puppet and Chef. I also discuss the explosion of tool alternatives in this space and cover some current-day best practices. Second, I discuss how to incorporate databases and DBAs into the Continuous Integration and Continuous Delivery process. This includes database migrations, strategies for enhancing collaboration between application developers and data teams, and database refactoring techniques.
So you think you've picked up enough JavaScript to be dangerous, but feel like the whole prototypal language thing is still a mystery. In this session, we'll go from basic JavaScript to advanced JavaScript. We'll discuss and code modular JavaScript with CommonJS. We'll look into the details of a prototype-based language and discuss techniques like parasitic inheritance. We'll also look at JavaScript libraries that help you get the most out of JavaScript - not jQuery, but libraries like UnderscoreJS and SugarJS.
This is a fast-paced session meant to bring you up to speed with the latest and greatest JavaScript techniques and tools. Whether you're building client-side JavaScript with HTML5 or Appcelerator Titanium, or server-side JavaScript with Node.js, you'll come away with knowledge and patterns for how the pros use JavaScript for building real apps.
We've come a long way down the JavaScript road. Gone are the days of 'just hack it' for the web - architecting even a small project in JavaScript can be a challenge. Thankfully, there are several frameworks to help you; the most popular currently is Backbone.js.
Before you start using a framework in JavaScript, you will want to understand the techniques expert JavaScript programmers use to build them. In this session, we'll dive into design patterns in JavaScript, and do live coding so you can see these patterns applied. Even if you're not using a framework, you can use these design patterns to make your code more maintainable, elegant, and concise.
JavaScript has a mixed heritage: OO and functional. To date, we developers have focused on the OO side of JavaScript, and not much mind-share has been given to the other, more powerful side. In this session we'll explore how to use the power of functional JavaScript.
JavaScript has elements of two distinct programming languages: Self and Scheme. These two languages are very different, and some of JavaScript's weirdness is due to this mixing of very different language designs. The conceptual models are also very different: one is a prototype-based object language, while the other is a functional language. In this session, we'll discuss how the functional elements of Scheme manifest in JavaScript. We're going to explore how you can write JavaScript in a more elegant and powerful way by applying functional concepts.
Jasmine is a browser-centric testing framework and one of the most popular among JavaScript developers. It takes a BDD approach to testing.
Mocha is a popular testing framework for JavaScript that runs in any JavaScript environment, including the web browser, Node.js, and Titanium. It allows for simple asynchronous testing, test coverage, and integration with CI tools. In this session, learn all you need to know to get started writing beautiful tests in JavaScript for these environments. We’ll discuss a number of add-ons for Mocha that make testing a breeze. We’ll talk about “should” and “chai” for expectation matching. We’ll discuss “Sinon” for mocking in your test cases. We’ll also talk about test automation with Grunt.
This session is a code-driven class that covers the Jasmine and Mocha JavaScript testing libraries. It is an introductory-level session.
In this session, we’ll look at some of the options for storing application and user data, such as simple JSON and SQLite. We’ll discuss the details of deciding to use SQL or a NoSQL solution. We’ll look at how to architect a data-driven web / mobile app, strategies for server synchronization, and take a deep dive into working with datasets in JavaScript.
Working with data in web browsers can be tricky - fortunately, the IndexedDB standard has come forward to help developers organize and store data. On mobile platforms, PhoneGap allows the use of both IndexedDB and SQLite. Titanium, an open-source native platform for mobile based on JavaScript, also has a host of options for storing application and user data. We'll look at utilizing IndexedDB in web browsers, PhoneGap, and Titanium. We'll also see how to leverage SQLite in PhoneGap and Titanium. This is a code-heavy session and all the code will be in JavaScript.
RabbitMQ is open source message broker software (sometimes called message-oriented middleware) that implements the Advanced Message Queuing Protocol (AMQP). While JMS works great in an all-Java environment, RabbitMQ works across all major platforms including Java, .NET, Python, Ruby, and JavaScript. RabbitMQ has the features you would expect from an enterprise-grade message queue, such as fault tolerance, clustering, and high availability. This is an introductory session to RabbitMQ.
RabbitMQ uses AMQP for messaging and features a nice management UI. While lightweight, it provides excellent performance and scalability. In this session we'll cover the basics of RabbitMQ and run some live code samples.
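As a preview of the kind of live samples we'll run, here is a minimal sketch of publishing a message with the RabbitMQ Java client. It assumes a broker running on localhost, and the queue name and message are purely illustrative.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class HelloPublisher {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumes a broker running locally

        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Declare a non-durable, non-exclusive, non-auto-delete queue and
        // publish a message to it through the default exchange.
        channel.queueDeclare("hello", false, false, false, null);
        channel.basicPublish("", "hello", null, "Hello, RabbitMQ!".getBytes("UTF-8"));

        channel.close();
        connection.close();
    }
}
```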
It was over 10 years ago that Spring hit the scene and made a big impact in the enterprise Java development ecosystem. Now that Spring 4.2 is available (and Spring 5 is on the way), there are a lot of new features and a lot that you may not know about yet.
Whether you're already working with Spring 4 or are anxious to make the move up, there are plenty of new tricks Spring has in store for you. We'll explore them all in this overview of everything that's new in Spring.
In this session, we'll see how to build real Spring applications using Spring Boot. We'll also look under the covers to see what makes Spring Boot tick.
Spring offers a number of configuration options: XML configuration, Java configuration, and Groovy configuration to name a few. To some degree, component-scanning and autowiring help eliminate some explicit configuration. But in general most Spring applications require some essential “bootstrap” configuration to enable key functionality. What's the right way to build Spring applications when there are so many choices?
What if I told you that configuration was optional?
Spring Boot is an exciting new programming model for Spring that makes it extremely easy to create stand-alone, production-ready Spring applications. Rather than writing lots of code to satisfy the needs of a framework, Spring Boot helps you focus your coding efforts on your application. Spring Boot takes an opinionated approach to configuring Spring, making it possible to create Spring applications with little or, in some cases, no Spring configuration at all!
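To give a flavor of just how little configuration is needed, here is a minimal Spring Boot web application sketch. It assumes the spring-boot-starter-web dependency is on the classpath; the class name and message are illustrative.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// A complete, runnable web application: auto-configuration supplies the
// embedded servlet container, Spring MVC setup, and sensible defaults.
@RestController
@SpringBootApplication
public class HelloApplication {

    @RequestMapping("/")
    public String home() {
        return "Hello, Spring Boot!";
    }

    public static void main(String[] args) {
        SpringApplication.run(HelloApplication.class, args);
    }
}
```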
In this session, we'll look at a technique known as Differential Synchronization, building on top of HTTP PATCH and JSON Patch to enable efficient two-way conversation between clients and the servers they deal with.
At some point, almost all client applications communicate with a server backend. But such communication comes with a cost. Network latency and bandwidth limitations can bring clients to their knees. And if the client is running on a mobile device, unnecessary communication can drain the device's battery.
Traditional REST, through which many clients and servers communicate, can be chatty, exhibiting the problems that kill client performance. In the interest of more efficient communication, it may be helpful to look beyond traditional REST. HTTP PATCH (and more specifically, JSON Patch) goes a long way toward reducing the chatter. But these specifications introduce their own problems and fail to properly address two-way data binding between client and server.
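To make the idea concrete, here is a sketch of an HTTP PATCH request carrying an RFC 6902 JSON Patch document: instead of re-sending the whole resource, the client describes only the operations to apply. The resource path, field names, and values are purely illustrative.

```http
PATCH /orders/42 HTTP/1.1
Content-Type: application/json-patch+json

[
  { "op": "replace", "path": "/status", "value": "shipped" },
  { "op": "add",     "path": "/trackingNumber", "value": "1Z999AA10123456784" },
  { "op": "remove",  "path": "/notes/0" }
]
```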
Even taking into account everything that we've learned about operating at “web scale” in the last several years, most applications are designed and implemented to be, as Michael Nygard highlights in his book Release It!, “feature complete” not “production-ready.” Unfortunately, the architectural patterns associated with operating at scale – notably the move toward “micro-services” – introduce an order of magnitude more ways in which our applications can fail. If we do not plan well for failure, we will experience the ultimate failure. But if we do plan well for failure, we have a fighting chance at success.
This talk will take a tour through James Chiles' book Inviting Disaster: Lessons From the Edge of Technology and examine what we can learn from engineering fields in the physical world, applying those lessons to the way in which we develop fault-tolerant applications.
Monolithic applications are difficult to understand, maintain, and extend with new features and functionality. Modularity helps achieve these goals. Unfortunately, few applications have been designed with modularity in mind. In this workshop, we take a deep dive into modularity.
In part 1, we'll start developing a software system using several of the patterns of modular architecture. We'll explore the patterns and then apply them to develop a sample application. Along the way, we'll discuss implementation variations and the consequences of our decisions. When finished, we'll have a simple but useful application that you can take home with you and easily extend with new functionality. This session is all pure Java, and you'll be able to apply the techniques you learn immediately. Be sure to bring a laptop.
Monolithic applications are difficult to understand, maintain, and extend with new features and functionality. Modularity helps achieve these goals. Unfortunately, few applications have been designed with modularity in mind. In this workshop, we take a deep dive into modularity.
In part 2, we'll finish the exercise we began in part 1. We'll continue applying several of the patterns of modular architecture. Upon completing the application, we'll have a short retrospective to discuss the consequences of our design decisions. To wrap up, we'll explore how using a framework (OSGi) that supports modularity extends the benefits of our modular architecture over to runtime without impeding our ability to leverage our modules directly atop standard Java. Be sure to bring a laptop.
Even with the recent explosion in alternative languages for the JVM, the vast majority of us are still writing code in “Java the language” in order to put bread on the table. Proper craftsmanship demands that we write the best Java code that we can possibly write. Fortunately we have a guide in Joshua Bloch's Effective Java.
In his foreword to the first edition, Guy Steele writes about the importance of learning three aspects of any language: grammar, vocabulary, and idioms. Unfortunately many programmers stop learning after mastering the first two. Effective Java is your guide to understanding idiomatic Java programming.
Effective Java is organized into 78 standalone “items,” all of which will be impossible to cover in one session. Instead I've chosen a subset of the most important techniques and practices that are commonly missed by today's Java programmers. You'll pick from a menu and decide where we'll head. Regardless of the path we take, you'll leave this session thoroughly equipped to write better Java code tomorrow!
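As one example of the kind of idiom the book teaches, here is a sketch of the Builder pattern from Item 2 (“Consider a builder when faced with many constructor parameters”), applied to a hypothetical Talk class of my own invention.

```java
public class Talk {
    private final String title;          // required
    private final String speaker;        // required
    private final int durationMinutes;   // optional
    private final boolean handsOn;       // optional

    public static class Builder {
        private final String title;
        private final String speaker;
        private int durationMinutes = 90;
        private boolean handsOn = false;

        public Builder(String title, String speaker) {
            this.title = title;
            this.speaker = speaker;
        }

        public Builder durationMinutes(int val) { durationMinutes = val; return this; }
        public Builder handsOn(boolean val)     { handsOn = val; return this; }

        public Talk build() { return new Talk(this); }
    }

    private Talk(Builder builder) {
        title = builder.title;
        speaker = builder.speaker;
        durationMinutes = builder.durationMinutes;
        handsOn = builder.handsOn;
    }
}

// Usage: readable at the call site, no telescoping constructors.
//   Talk talk = new Talk.Builder("Effective Java", "Josh")
//           .durationMinutes(60).handsOn(true).build();
```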
The Cloud Foundry engineering teams have steadily increased their use of the Go programming language (http://golang.org) for building (or rebuilding) components, starting with the Router, and progressing through Loggregator, the “cf” CLI, and more recently the Health Manager 9000.
As a Java-developer-turned-DevOps-junkie focused on helping our customers and community succeed with Cloud Foundry, it's become clear to me that I need to add Go to my knowledge portfolio.
Go is a very interesting language, open-sourced by Google in late 2009, that takes a “less is more” (http://commandcenter.blogspot.de/2012/06/less-is-exponentially-more.html) approach to language design, but that also bakes in a powerful concurrency model.
This talk will introduce Go, delve into its distinctive features, and contrast its approach with that of Java (where appropriate). We'll also write a fair amount of Go code along the way. This talk will be of particular interest to Java developers looking to add Go to their toolkits, but will also be of interest to anyone looking to learn a little bit more about Go.
We make many assumptions when we develop our applications. Many of these assumptions no longer hold true when we start to build applications for the cloud. Cloud platforms also introduce architectural possibilities that do not exist in traditional deployment settings. This session will examine five architectural patterns that we can apply to our applications in order to prepare them for the unique characteristics of cloud environments.
We'll cover the following patterns:
Examples will focus on the application of these patterns using Java/Spring and Cloud Foundry-based PaaS platforms, but should be applicable to any language/framework/PaaS platform combination.
“Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere.” Docker creates containers that provide a running process with:
It does this by leveraging low-level Linux kernel primitives like cgroups and namespaces. The end result is a portable application container that can run anywhere Docker can run, including on VMs, bare-metal servers, OpenStack clusters, public cloud instances, or combinations of the above.
Containers are an excellent way to package your application such that it can run consistently everywhere you want to run it, a fantastic step toward Continuous Delivery. In this session we'll look at how to use Docker to package, deploy, and run Java applications and other services. We'll also compare Docker to another container solution, Warden, which is a key component of the Cloud Foundry PaaS.
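As a rough sketch of the packaging step, a Dockerfile for a standalone Java service might look like the following; the base image tag, jar name, and port are illustrative assumptions, not a prescription.

```dockerfile
# A minimal sketch of packaging a standalone Java service as a Docker image.
FROM java:8

# Copy the application's executable jar into the image.
COPY target/my-service.jar /opt/my-service.jar

# Expose the port the service listens on and define how the container starts.
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/opt/my-service.jar"]
```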
Inspired by success stories from companies such as Amazon and Netflix, many organizations are moving towards microservice architectures at a brisk pace. This session provides a thorough overview of the pros and cons of microservice architectures, when they are applicable, and some nascent best practices.
Microservice architecture is important because it’s the first architecture to fully embrace the Continuous Delivery and DevOps revolutions. In this session, I cover the motivations for building a microservice architecture (including the distinction between “regular SOA” and microservices), some considerations you must make before starting (such as transactions versus eventual consistency), how to determine service partition boundaries, and ten tips to guide you towards success. I also discuss the role of polyglot development, enterprise governance, how data and databases fit into this new world, and tooling to help ensure consistency between core services like logging and monitoring.
This multi-disciplinary session takes a deep dive into the confluence of topics required to fully understand the intersection of Continuous Delivery and architecture, including evolutionary architecture and emergent design, with an emphasis on how architectural decisions affect the ease in changing and evolving your code, the role of metrics to understand code, how Domain-Driven Design's Bounded Context reifies in architecture, how to reduce intra-component/service coupling, and other techniques.
Continuous Delivery is a process for automating the production readiness of your application every time a change occurs – to code, infrastructure, or configuration. Rather than honing skills at predicting the future via Big Design Up Front, Continuous Delivery emphasizes techniques for understanding and changing code with less cost during the process. Some architectures and engineering practices yield better designs for this environment. This multi-disciplinary session takes a deep dive into the confluence of topics required to fully understand the intersection of Continuous Delivery and architecture, including evolutionary architecture and emergent design, with an emphasis on how architectural decisions affect the ease in changing and evolving your code, the role of metrics to understand code, how Domain-Driven Design's Bounded Context reifies in architecture, how to reduce intra-component/service coupling, and other techniques.
A Technology Radar is a tool that forces you to organize and think about near-term technology decisions, both for you and your company. This talk discusses using the radar for personal breadth development, architectural guidance, and governance.
The ThoughtWorks Technical Advisory Board creates a “technology radar” twice a year, a working document that helps the company make decisions about interesting technologies and where we spend our time. ThoughtWorks then started conducting radar-building exercises for our clients, which provide a great medium for technologists company-wide to express their opinions about the technologies they use every day. For companies, creating a radar helps you document your technology decisions in a standard format, evaluate technology decisions in an actionable way, and create cross-silo discussions about suitable technology choices. This session describes the radar visualization and how to conduct a radar-building session for yourself. After a brief introduction, the bulk of the workshop consists of attendees building a radar for the group, following the same procedure you'll use when you do this exercise at your company. At the end, we'll have created a unique radar for this event, and you'll have practiced doing it yourself.
In this session, we won't be moulding our own Lego bricks, but we will see how relatively new specifications combine to enable custom HTML elements to do anything we want them to do. We'll also have a look at some libraries built on top of Web Components that make it easier than ever to build epic web clients.
HTML tags are like Lego pieces. Lego bricks come in many different shapes and colors, and by piecing the right parts together, you can create some amazing things. But if you've ever tried to build anything significant with Legos, you've probably wished for some piece that doesn't exist. If only there were some way to craft your own Lego pieces to fit your imagination, you could make an epic creation. But you have no choice but to work with the bricks you have.
Likewise, HTML defines several tags to address many needs for building web applications. But at some point there's going to be that one tag that you need that doesn't exist. If only there were some way to define your own HTML tags, you could build an epic web client. Unfortunately, you have no choice but to work with the tags HTML offers.
Next Generation Web Applications -> Aggregation of Resources
Almost every example of an agile project involves a single team and while many successful projects are delivered that way, most enterprise software requires the interaction of several teams. But how do we scale agile beyond a single team? What practices translate and which ones don't? In this talk we'll discuss some of the issues you'll encounter as you move agile beyond a single group and how you can keep multiple stakeholders happy. While it isn't as simple as having a “scrum of scrums” it isn't as hard as replacing every line of COBOL.
Technology changes, it's a fact of life. And while many developers are attracted to the challenge of change, many organizations do a particularly poor job of adapting. We've all worked on projects with, ahem, less than new technologies even though newer approaches would better serve the business. But how do we convince those holding the purse strings to pony up the cash when things are “working” today? At a personal level, how do we keep up with the change in our industry?
This talk will explore ways to stay sharp as a software professional. We'll talk about how a technology radar can help you stay marketable (and enjoying your career) and how we can use the same technique to help our companies keep abreast of important changes in the technology landscape. Of course, it isn't enough to just be aware; we have to drive change - but how? This talk will consider ways we can influence others and lead change in our organizations.
You understand the Web. Why do you do REST so wrong?
Doing REST “right” isn't a matter of conformance or purity. It isn't about pleasing hard-to-please Restafarian personalities. It's about understanding that WYBIWYG (What You Build Is What You Get). Decisions have consequences. You are free to design and implement whatever you like, but you need to understand the consequences of your choices.
REST, as defined by Roy Fielding's thesis, is a collection of architectural constraints designed to yield certain properties in deployed systems. When you take shortcuts, you simply will not receive all the benefits of loosely coupled, evolvable, flexible, scalable systems. URLs are not enough. HTTP is not enough. To fully embrace the world of REST, you must understand Hypermedia. The good news is that you already do; you've apparently just forgotten.
Here's the thing though. REST is not an endpoint, it is just a beginning. Come to this talk to hear how the story starts. We will take a deep dive into why the Web works, the implications for building Hypermedia-driven REST APIs and start to look at what this means in practice.
The cost of integrating information isn't cheap. Well, at least it isn't if you do it wrong. Chances are, you're doing it wrong.
The single most difficult aspect of data integration is the effort to achieve consensus. It isn't just that we are disagreeable people. It's also that it is a fantasy that there is a “common model” or a “global truth”. Different groups and individuals see the world differently and have different needs from information systems. Language, and therefore what we call things, isn't simply reflective of reality. It plays a constructive and interpretive role.
The problem is that our technologies force us to make choices about world views and pretend that things aren't changing constantly. This yields fragile systems and a high impedance to change that cascades through our organizations. The result is expensive, rigid systems that are difficult to extend and that fail to give the business what it wants.
Our friend Tim Berners-Lee and his Happy W3C Merrymakers have given us a set of technologies to help us solve these problems though. We forget that the Web he designed was not the public Web, but one to solve integration needs for complex organizations like CERN. The HTML bit that we have gotten so excited about is but a small part of the vision. We will introduce RDF and SPARQL as enabling technologies. They do not necessarily replace what you already have, but they do make it possible to share information with people you've never talked to: Collaboration without Coordination.
Webs of documents are fabulous enough. Webs of data will blow your mind.
A table is a fixed structure. A tree is as well. A graph can go on forever and be extended at any time by anyone.
The Web is an unbounded graph. It is our definition of scale. What happens when we start to use it as the basis for sharing information, not just documents? This does not necessarily mean the public Web (although it is certainly appropriate for that as well). It simply means thinking of information as a web of linked entities connected through discoverable relationships.
Linked data is a way of doing this, but it is also an established project connecting billions of entities from disparate, unrelated sources. How does that even work? What can you do with such a thing? And what does that mean for your organization?
By building on the ideas introduced in the Data Integration talk, we will explore how webs of data built on standards can change everything.
Encryption is a powerful tool for privacy. At least that is what we're meant to think.
If you consider encryption to be a black box of magic, you should probably attend this talk.
If you think encryption will protect your secrets, you should probably attend this talk.
If you have (or haven't) been following the news, you should probably attend this talk.
The truth is, encryption can be an effective way of making it harder for people to steal your secrets. But it isn't magical, it isn't foolproof and, depending on how you are using it, may be completely useless. It is a hard topic that we'll only touch the surface of, but there are very few topics that are more crucial for our industry and profession to understand better than encryption. You don't have to understand the math (although that will help), but you do have to understand what it will and won't do for you*.
*and how implementations of it may have been intentionally compromised
Early releases of Java performed poorly, but those issues largely disappeared long ago with the introduction of HotSpot. However, much of the performance advice for Java persists through hearsay from those early days.
In this talk, we'll forget the hearsay and take an objective look, using benchmarking and profiling tools, to find out which optimizations matter today and, just as importantly, which don't.
HotSpot promises to do wonders for us by Just-in-Time (JIT) compiling the “right” code for us, but how does it make those decisions? And, perhaps more importantly, what happens when it's wrong?
In this talk, you'll learn through real code examples just how the JVM decides to compile your code, deoptimize your code, and stop-the-world for a GC.
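As a simple way to watch these decisions happen yourself, you can run a toy program like the sketch below with the -XX:+PrintCompilation flag (and, after unlocking diagnostic options, -XX:+PrintInlining) and observe which methods HotSpot compiles and inlines. The class and method names are illustrative.

```java
// Run with: java -XX:+PrintCompilation HotLoop
// Optionally: -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining
public class HotLoop {

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 1_000_000; i++) {
            total += compute(i);
        }
        System.out.println(total);
    }

    private static int compute(int n) {
        // Small, frequently invoked method: a prime candidate for JIT
        // compilation and for inlining into its caller.
        return (n * 31) ^ (n >>> 3);
    }
}
```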
Over the years, Java developers have learned through trial and error the best ways to use Java's type system. But certain parts of the type system, like wildcard generics and covariant return types, are underused, in part because they are not well understood.
Fortunately by going back to the mathematical roots of type systems, we can understand not only wildcard types and covariance, but also why we should prefer composition over inheritance and even how compilers perform some of their optimization magic.
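As a small sketch of the payoff, the classic “producer extends, consumer super” wildcard idiom lets a single copy method accept any sensible combination of list types; the class and method names here are illustrative.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Wildcards {

    // "Producer extends, consumer super": we only read from src (a producer of
    // Numbers), so any subtype of Number works; we only write to dst (a
    // consumer of Numbers), so any supertype of Number works.
    static void copy(List<? super Number> dst, List<? extends Number> src) {
        for (Number n : src) {
            dst.add(n);
        }
    }

    public static void main(String[] args) {
        List<Integer> ints = Arrays.asList(1, 2, 3);
        List<Object> objects = new ArrayList<>();

        // Without the wildcards, this perfectly safe copy would not compile.
        copy(objects, ints);
        System.out.println(objects);
    }
}
```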
Most of us don't want to go back to the days of malloc and free, but the garbage collector isn't always our friend.
In this presentation, you'll learn about the different garbage collection strategies used in JVMs, how to monitor garbage collection, analyze memory dumps, and why you might want to use one collection strategy instead of another.
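One simple, portable way to peek at collector activity from inside a running JVM is the GarbageCollectorMXBean API, sketched below; the allocation loop exists only to give the collector something to do, and the collector names you see will vary with the strategy chosen via flags such as -XX:+UseG1GC.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {

    public static void main(String[] args) {
        // Allocate some short-lived garbage so the collector has work to do.
        for (int i = 0; i < 100_000; i++) {
            byte[] junk = new byte[1024];
        }

        // Each MXBean corresponds to one collector in this JVM (typically one
        // for the young generation and one for the old generation).
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```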
Take a look at your codebase. Go ahead, this abstract will wait. Notice anything? Perhaps a few more lines of JavaScript than years past? JavaScript is no longer an outlier, a language for the interns, something we can just mash together. Today, JavaScript is a first-class citizen. As such, we need to treat it with all the care and feeding we extend to our server-side languages. This talk will introduce you to a set of tools that will help you write bulletproof JavaScript.
Step one: make sure we aren't making any basic mistakes, like using == when we really mean ===. To remedy these types of bugs, we'll leverage JSHint to statically analyze our code. In addition to walking through the setup, we'll discuss how to ratchet up the rules as you improve your codebase. Just like Java or C#, we also need to test our JavaScript code. We'll introduce Jasmine, a BDD-style testing tool, as well as other tools that help in the testing process. Last but not least, we'll take a tour of Plato, a JavaScript source code visualizer. Taken together, these tools can go a long way toward improving your JavaScript code.
Back in the day, web developers had to rely on their wits and a plethora of alert statements - to say our toolkit was spartan would be an understatement. But with the increased importance of web front ends and the rise of JavaScript MVC frameworks, a modern web developer toolkit is finally emerging.
We've evolved from text editors to full-fledged IDEs with code completion and refactoring tools, but our toolchain doesn't end there. With multiple testing libraries, mocking frameworks, test drivers, and even code coverage tools, today's web developer gets to walk downhill on a sunny day.
Many software developers point their careers toward ascending to the gilded rank of Architect…but what does it mean to actually be one? While many of us labor under the false pretense that the role is all about making technical decisions, the reality is often very different. You'll code less, spending more time on activities that lack an objective green/red bar. But you'll also have an opportunity to impact far more than one project.
In this talk, I'll speak to my own journey. We'll touch on influencing coworkers, the importance of communication, and the importance of a cup of coffee.
This is a revised and updated version of the previous talk, with current thinking from practice and the literature. The talk presents why conflicts with your manager are inevitable based on differences in priorities and perspectives, and how to plan for them. The goal is to show you how to build the loyalty relationship that allows you to get what you need when you need it.
Topics covered will include diagnosing communication styles, lessons from game theory, working within the organizational hierarchy, and lessons on how to build a relationship with your manager that still allows you the freedom to express yourself and what you really want.
The basics of developing for the Android platform will be explored, from setting up the SDK to using the Android Studio IDE and the generated Gradle build files. No previous experience is required, other than a basic knowledge of Java.
After discussing how Android fits into the marketplace, we'll look at creating applications, how to use activities, and working with layouts.
Building on the previous talk, we'll add intents, customized layouts for alternative configurations, talk about the activity lifecycle, use logging, and more.
We'll deploy to both emulators and connected devices, and change input styles.
This session will move beyond the basics to cover Android persistence mechanisms, accessing RESTful web services, and more. We'll look at shared preferences, basic file I/O, and the SQLite database. We'll also show how to operate off of the UI thread to access data from remote servers.
The application will access JSON data on a remote server, parse the data, and update the user interface based on the response.
Java EE 7 is around the corner and the horizons for Java EE 8 are emerging. This session looks into the key changes the community can expect. The goal of this session is to foster interest and discussion around these changes.
Some of the changes discussed include retiring EJB 2 entity beans and JAX-RPC, greater alignment with CDI, WebSocket/HTML 5 support, a standard API for JSON processing, the next version of JAX-RS, an overhaul of JMS, long-awaited concurrency utilities, batch processing in Java EE and much, much more.
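As a taste of that standard JSON processing API (JSON-P, JSR 353), here is a small sketch that builds a JSON object with the fluent builder; the field names and values are illustrative.

```java
import javax.json.Json;
import javax.json.JsonObject;

public class JsonDemo {

    public static void main(String[] args) {
        // The JSON-P object model API standardized in Java EE 7.
        JsonObject session = Json.createObjectBuilder()
                .add("title", "What's Coming in Java EE")
                .add("length", 90)
                .add("tags", Json.createArrayBuilder()
                        .add("JavaEE")
                        .add("WebSocket"))
                .build();

        System.out.println(session.toString());
    }
}
```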
Domain-Driven Design (DDD) promises to simplify enterprise application development and is gradually gaining traction as an alternative to traditional four-tier architectures originally popularized by J2EE. As the name implies, DDD is an architectural approach that strongly focuses on materializing the business domain in software. This session demonstrates first-hand how DDD can be implemented using Java EE via a project named Cargo Tracker.
Part of the Java EE BluePrints, Cargo Tracker seamlessly maps concepts like bounded contexts, layered architectures, entities, value objects, aggregates, services, repositories and factories to realistic code examples. The Cargo Tracker project also embraces popular practices adopted by the DDD community such as Object Oriented Analysis, Domain Models, Test Driven Development, Agile Refactoring, Continuous Integration, Object Relational Mapping, Dependency Injection and Cross-Cutting Concerns, incorporating these concepts into a realistic Java EE application.
This session examines the efforts with JSR 356 to support WebSocket in the Java programming model, from its base-level integration in the Java Servlet and Java EE 7 containers to a new, easy-to-use API and tool set that is part of the standard enterprise Java platform.
The family of HTML5 technologies has pushed the pendulum away from thin client technologies and toward ever-more-capable Web clients running on today’s browsers. In particular, WebSocket brings new opportunities for efficient peer-to-peer communication, providing the basis for a new generation of interactive and “live” Web applications. JSR 356 brings these capabilities to server-side Java developers.
The session is a deep dive into JSR 356 including some demos.
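As a preview of the programming model, here is a minimal sketch of an annotated JSR 356 server endpoint; the /echo path and class name are illustrative.

```java
import javax.websocket.OnMessage;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// An annotated endpoint that a Java EE 7 container deploys automatically.
@ServerEndpoint("/echo")
public class EchoEndpoint {

    @OnOpen
    public void onOpen(Session session) {
        System.out.println("Opened session " + session.getId());
    }

    @OnMessage
    public String onMessage(String message) {
        // Returning a value sends it back to the connected peer.
        return "echo: " + message;
    }
}
```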
Java 8 is finally released. We haven't seen this many changes to Java the language since Java 5. This session will provide a review of the changes in Java 8, with a focus on the language changes, such as the addition of lambdas, which will either have you excited about a feature that lets you express what you want rather than how to do it, or have you concerned about readability in your code base.
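As a quick illustration of that “what versus how” distinction, here is a small sketch contrasting a pre-Java 8 loop with a Java 8 stream-and-lambda pipeline; the data is illustrative.

```java
import java.util.Arrays;
import java.util.List;

public class Lambdas {

    public static void main(String[] args) {
        List<String> sessions = Arrays.asList("Lambdas", "Streams", "Nashorn");

        // Java 7 and earlier: spell out *how* to iterate and filter.
        for (String s : sessions) {
            if (s.startsWith("L")) {
                System.out.println(s.toUpperCase());
            }
        }

        // Java 8: declare *what* you want with lambdas and the Stream API.
        sessions.stream()
                .filter(s -> s.startsWith("L"))
                .map(String::toUpperCase)
                .forEach(System.out::println);
    }
}
```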
This session will introduce the changes of Java 8 including language changes such as:
and will provide details of the JVM changes such as:
If there is a safe bet, it is that everything fails. From hard drives to power supplies, from routers to software, everything fails. Frankly, a planned upgrade of a production service is planned downtime, a failure to continue service for a planned period of time.
This session will look at planning for failure by deploying into an Apache Mesos cluster.
This session will delve into how Apache Mesos identifies failures and how it protects against them. It will also look at how to upgrade infrastructure without downtime.
The drive to continuous delivery, microservices, and PaaS includes the need to auto-scale and potentially auto-provision services. This session will identify the legacy thinking of a statically partitioned world and move toward the scalable world of Apache Mesos.
This session will look at the failings of many of today's cloud technologies, identify the goals we have, and look into the tooling for how to get there. In this session we will look at:
This session will be packed with demonstrations.
Didn't you hear the news? TDD is dead. Yet many developers rely on it for quality code. Come join the zombie apocalypse to learn and understand TDD: what sets it apart from unit testing after the fact, what to do when you need to update code, effective mocking, automatically generating test data (and lots of it), leaving code alone and respecting your work, and more.
This presentation will cover:
Time is very precious and is often threatened by phone calls, emails, co-workers, bosses, and most of all, yourself. The Pomodoro Technique reins in unfocused time and gives your work the urgency and the attention it needs, and it's done with a kitchen timer.
In this presentation we discuss how to set up, estimate time, log time, deal with interruptions, and integrate with Agile as a team. We discuss timer software and even some of the great health benefits of the Pomodoro Technique.
The sea change in HTML 5 is likely to shift the pendulum away from today's thin-client based server-side web frameworks like Struts 2 and JSF to JavaScript powered next generation rich clients. With strong support for REST, WebSocket and JSON, Java EE 7 is well positioned to adapt to this change.
In this heavily code-driven session, we will show you how to use today's most popular JavaScript rich client technologies like AngularJS, Backbone, Knockout, and Ember to leverage the core strengths of Java EE using JAX-RS, JSR 356/WebSocket, JSON-P, CDI, and Bean Validation.
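On the server side, the JAX-RS piece of that picture can be as small as the following sketch of a resource serving JSON to a JavaScript client; the path and payload are illustrative, and a real resource would return domain objects and let JAX-RS handle the JSON binding.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// A hypothetical resource an AngularJS/Backbone/Knockout/Ember client
// could consume over plain HTTP + JSON.
@Path("sessions")
public class SessionResource {

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String list() {
        // Kept trivial for the sketch; normally you'd return domain objects.
        return "[{\"title\":\"Java EE 7 and HTML5\"}]";
    }
}
```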
Enough with the darn JavaScript frameworks already! There's nothing wrong with a judicious use of this ubiquitous programming language, but it's gotten a little out of hand. What if there were an evolvable future state of declarative and encapsulated user interface elements that was available today in most modern browsers?
There is! The Polymer Project is a young but impressive glimpse into where things should go.
Its goals are:
This talk will introduce you to:
Scala is known for both its clarity in some cases and its obscurity in others. Well, this presentation sticks with the obscurity. We will cover abstract types, the Predef, implicit conversions, creating infix types, singleton types, type variance, type bounds, partially applied functions vs. partial functions, type projections, and overcoming type erasure using Manifests.
This presentation is geared at the programmer level, not at a scientific level, with no type theory or calculus. Basic knowledge of Scala is preferable, though not required. Asking tough questions is mandatory.
The single most important tool in any developer's toolbox isn't a fancy IDE or some spiffy new language - it's our brain. Despite ever-faster processors with multiple cores and expanding amounts of RAM, we haven't yet created a computer to rival the ultra-lightweight one we carry around in our skulls - in this session we'll learn how to make the most of it. We'll talk about why multitasking is a myth, the difference between the left and right sides of your brain, the importance of flow, and why exercise is good for more than just your waistline.
Building a PaaS
In the words of John Gage, “The network is the computer”. At the heart of everything we do is a complex system of infrastructure from which we are often abstracted. For general application development, this abstraction provides the convenience of simplifying our efforts. But with a growing number of mobile applications facing intermittent connectivity and higher latency, and with increased hostility on the network from a security standpoint, there is great value in pulling back the curtain and understanding the details of this computer.
This session will start with an understanding of networking at a low level. At this level we will discuss IP, MAC, ARP, DNS, and DHCP. As we walk up an abstraction level, we discuss sockets, NAT, gateways, and firewalls, along with the use of TCP and UDP. Spending some time at this layer can make network developers more productive, as we look at tools that help us answer the questions “Who owns this port?”, “Where is this packet going?”, and “What is my latency, and why?”
The session will end with a little fun looking at wifi, where we will sniff, snort, and crack :) From a security standpoint, we will look at the challenges of wifi and how it has become the weakest component on the net.
This session is extremely fast-paced. Attendees will come away with an enhanced understanding of this thing we call the Internet. It will include, through discussion or demonstration, tools such as tcproute, tcpdump, nemesis, nmap, tcpmon, and wireshark.
Building a PaaS