Getting software released to users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. This workshop focuses on the Deployment Pipeline concept from Continuous Delivery.
In this workshop I move from release back through testing to development practices, analyzing at each stage how to improve collaboration and increase feedback so as to make the delivery process as fast and efficient as possible. At the heart of the workshop is a pattern called the deployment pipeline, which involves the creation of a living system that models your organization's value stream for delivering software. I spend the first half of the workshop introducing this pattern, and discussing how to incrementally automate the build, test and deployment process, culminating in continuous deployment.
Getting software released to users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. This workshop focuses on the agile infrastructure required to implement a deployment pipeline and continuous delivery.
In this workshop, I introduce agile infrastructure, including the use of Puppet to automate the management of testing and production environments. We discuss automating data management, including migrations. Development practices that enable incremental development and delivery will be covered at length, including a discussion of why branching is inimical to continuous delivery, and how practices such as branch by abstraction and componentization provide superior alternatives that enable large and distributed teams to deliver incrementally.
This multi-disciplinary session takes a deep dive into the confluence of topics required to fully understand the intersection of Continuous Delivery and architecture, including evolutionary architecture and emergent design, with an emphasis on how architectural decisions affect the ease of changing and evolving your code, the role of metrics in understanding code, how Domain Driven Design's Bounded Context reifies in architecture, how to reduce intra-component/service coupling, and other techniques.
Continuous Delivery is a process for automating the production readiness of your application every time a change occurs – to code, infrastructure, or configuration. Rather than honing skills at predicting the future via Big Design Up Front, Continuous Delivery emphasizes techniques for understanding and changing code with less cost during the process. Some architectures and engineering practices yield better designs for this environment. This multi-disciplinary session takes a deep dive into the confluence of topics required to fully understand the intersection of Continuous Delivery and architecture, including evolutionary architecture and emergent design, with an emphasis on how architectural decisions affect the ease of changing and evolving your code, the role of metrics in understanding code, how Domain Driven Design's Bounded Context reifies in architecture, how to reduce intra-component/service coupling, and other techniques.
There is a good amount of excitement about the new version of Java. The big evolution, of course, is lambda expressions. In this presentation we will dive into the language features in Java 8, take a look at some of their nuances, and look at ways to put them to good use.
Java 8 language capabilities and application.
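For illustration, here is a small Java 8 sketch (the data is made up for this example) combining a lambda, a method reference, and the Streams API:

    import java.util.Arrays;
    import java.util.List;

    public class LambdaTaste {
        public static void main(String[] args) {
            List<String> names = Arrays.asList("Kate", "Jake", "Ben", "Brian");

            names.stream()
                 .filter(name -> name.length() > 3)   // lambda used as a java.util.function.Predicate
                 .map(String::toUpperCase)            // method reference used as a Function
                 .sorted()
                 .forEach(System.out::println);       // prints BRIAN, JAKE, KATE
        }
    }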
Programming concurrency has turned into a herculean task. I call the traditional approach the “synchronized and suffer” model. Fortunately, there are other approaches to concurrency, and you can reach out to them directly from your Java code.
In this presentation we will discuss actor-based concurrency and software transactional memory. We will then develop examples using Akka and compare the power of these approaches against the traditional approach.
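To make that concrete, here is a minimal sketch against Akka's classic untyped-actor Java API (the Greeter actor and system name are invented for illustration; exact API details vary by Akka version):

    import akka.actor.ActorRef;
    import akka.actor.ActorSystem;
    import akka.actor.Props;
    import akka.actor.UntypedActor;

    public class Greeter extends UntypedActor {
        @Override
        public void onReceive(Object message) {
            if (message instanceof String) {
                System.out.println("Hello, " + message);   // no shared mutable state, no locks
            } else {
                unhandled(message);
            }
        }

        public static void main(String[] args) {
            ActorSystem system = ActorSystem.create("demo");
            ActorRef greeter = system.actorOf(Props.create(Greeter.class), "greeter");
            greeter.tell("Akka", ActorRef.noSender());     // fire-and-forget message passing
            system.shutdown();                             // shut the actor system down when done
        }
    }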
Invokedynamic is the Java 7 feature that had the most impact at the bytecode level and in terms of performance. First perceived as a feature to help dynamically typed languages on the JVM, it turned into a powerful feature that has been exploited quite a bit in the implementation of features in the Java language itself.
In this presentation we will understand what problem this feature really solves and how it has influenced other features in the Java language and on the Java platform.
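Java source cannot emit the invokedynamic instruction directly, but the java.lang.invoke machinery underneath it can be explored from plain Java; a small illustrative sketch:

    import java.lang.invoke.MethodHandle;
    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.MethodType;

    public class MethodHandleDemo {
        public static void main(String[] args) throws Throwable {
            // Look up String.toUpperCase() as a method handle -- the same plumbing
            // that invokedynamic call sites (and Java 8 lambdas) are linked with.
            MethodHandle toUpper = MethodHandles.lookup().findVirtual(
                    String.class, "toUpperCase", MethodType.methodType(String.class));
            String result = (String) toUpper.invokeExact("hello");
            System.out.println(result);   // prints HELLO
        }
    }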
Poorly designed web applications fail to serve both the business and the users, leading to unnecessary costs and frustrated customers. By keeping the user in mind and following a few simple guidelines, you can make huge leaps in the way your users interact with your applications.
In this session, we will focus on the enterprise - where the user base is known and fairly consistent, but where user interface and interaction are typically not the highest priorities. We will discuss some rules for good design and look at a few simple design elements that can make your web apps shine and your users rejoice.
In this session, we will take a look at Angular - a new MVC framework by Google. We will discuss some of the terminology that Angular introduces, and see how we can use it to develop highly interactive, dynamic web applications. See “Detail” for a list of topics I cover and the GitHub repo URL.
This is an intro-level talk in which we will take a look at Angular and at developing rich web applications. Angular embraces HTML and CSS, allowing you to extend HTML towards your application, and uses plain JavaScript, which makes your code easy to reuse and test.
Note: This is an intro level talk. It is targeted towards developers who are curious about Angular and want to learn about the fundamental features and concepts in Angular.
Topics Covered -
- ng-app
- ng-init and the evaluation of {{ }}
- directives
- $rootScope
- ng-model
- $scope
- ng-repeat
- ng-form, form validation and submission in AngularJS
- $http
GitHub URL - https://github.com/looselytyped/angudone-backend/tree/solutions
After almost a decade and several significant releases, Spring has gone a long way from challenging the then-current Java standards to becoming the de facto enterprise standard itself. Although the Spring programming model continues to evolve, it still maintains backward compatibility with many of its earlier features and paradigms. Consequently, there's often more than one way to do anything in Spring. How do you know which way is the right way?
In this session, we'll explore several ways that Spring has changed over the years and look at the best approaches when working with the latest versions of Spring.
In this presentation, we'll see how to use Spring to create, secure, streamline, hyperlink, and consume REST APIs.
In modern applications, there is a diverse array of clients consuming content from the web. Each of these clients has unique capabilities and limitations, demanding that the presentation of the application be tailored to each device. As a result, presentation logic is often pushed into the client itself, leaving the application to serve a common, data-oriented, lightweight API to be consumed by each client.
In this session, we're going to combine the magic of Spring Boot and the magic of Spring Data to yield something even more powerful. You'll see how to quickly build an application's persistence layer, whether it stores data in a RDBMS, Mongo, Neo4j, or several other popular data stores. You'll also see how to create a functioning REST API with nothing more than an interface and a domain type.
Spring Boot dramatically simplifies application development with Spring. But before Spring Boot came along, Spring Data was already making developers' lives easy when it comes to working with data. When combined, Spring Data and Spring Boot can make data persistence the easiest part of your application.
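A rough sketch of what that looks like (the Person domain type and repository are invented for illustration, and annotations may differ slightly across Spring Boot versions):

    import java.util.List;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.data.repository.CrudRepository;

    @Entity
    class Person {
        @Id @GeneratedValue
        Long id;
        String name;
    }

    // Spring Data generates the implementation; the method name drives the query.
    interface PersonRepository extends CrudRepository<Person, Long> {
        List<Person> findByName(String name);
    }

    @SpringBootApplication
    public class Application {
        public static void main(String[] args) {
            SpringApplication.run(Application.class, args);
        }
    }

With Spring Data REST on the classpath, a repository like this can also be exposed as a hypermedia-driven REST endpoint without writing a controller by hand.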
So you've gotten a handle on Git and know how to use it for everyday development tasks like committing code and pushing and pulling changes with the rest of the team. But do you really know how it works under the covers? In this brief demonstration, we'll commit a file to a brand new repository without ever touching the git add or git commit commands, and in the process learn some critical Git internals that every power user should know.
We'll also take a look at some advanced history and undo commands like reflog and reset, and how to rewrite past mistakes with interactive rebase. Bring your questions and Git challenges for 90 minutes of advanced Git fun!
Most developers think of Git and GitHub as two sides of the same coin, but all too often our attention is focused on the Git side alone, and not on the capabilities of Planet Earth's most-used Git hosting service. More than two million developers have already joined the site that offers amazing features like pull requests, wikis, project pages, integrated web site hosting, issue tracking, metric visualizations, permission controls, and easy integration with third-party services.
Come to this talk to learn how to make better use of GitHub through the site's commonplace and advanced features alike.
By now, we are all familiar with the new orthodoxy: the product owner discerns the needs of the customer and feeds them to developers in the form of a prioritized backlog. Developers pull work from that backlog, always confident that they're working on the highest-priority feature at the moment, and never having to worry about how those priorities are allocated. This system is simple, efficient, and has helped many teams function better than they used to.
Shakespeare wrote, “The first thing we do, let's kill all the lawyers.” It might be time to apply this aphorism to product management.
A few revolutionary companies are experimenting with the idea that developers should be in charge not only of when they build new features, but what features to build. Rather than mere code technicians following the will of a product and marketplace expert, developers themselves become experts in their product domain, building the tools users need—by conceiving of those tools themselves. Dispensing with the product owner creates an entirely new organizational tenor: one in which everyone is encouraged to master the business's domain, to organize their work in autonomous ways, and to take ownership of the purpose for which the organization exists.
Come ready to hear ground-breaking ideas and engage in group discussion about how these ideas might be put into practice in your workplace.
A Technology Radar is a tool that forces you to organize and think about near term future technology decisions, both for you and your company. This talk discusses using the radar for personal breadth development, architectural guidance, and governance.
ThoughtWorks Technical Advisory Board creates a “technology radar” twice a year, a working document that helps the company make decisions about interesting technologies and where we spend our time. ThoughtWorks then started conducting radar-building exercises for our clients, which provides a great medium for technologists company-wide to express their opinions about the technologies they use every day. For companies, creating a radar helps you document your technology decisions in a standard format, evaluate technology decisions in an actionable way, and create cross-silo discussions about suitable technology choices. This session describes the radar visualization and how to conduct a radar building session for yourself. After a brief introduction, the bulk of the workshop consists of attendees building a radar for the group, following the same procedure you'll use when you do this exercise at your company. At the end, we'll have created a unique Radar for this event and practiced doing it for yourself.
Although Agile has proven to provide incredible benefits in software development and delivery, it is not foolproof, nor a “Silver Bullet.” Plenty of factors need to be considered before attempting this highly disciplined approach.
Learn from the mistakes other organizations have made and discover which pitfalls to avoid to ensure that your first attempt at applying an Agile approach will be met with a successful outcome. This hour-long web seminar will explore these areas and provide clear steps your team and organization can take to maximize the opportunity for the best possible results.
Some come to Agile assuming it involves less discipline than their traditional methods, but this is a misperception. Today, the need for discipline in software development is greater than it ever was. Agile answers that need, arriving at discipline through the Team. Agile Teams must collaborate to develop strong discipline in both planning and execution.
We'll discuss how teams can obtain Agile discipline to achieve one of our core principles of delivering “working software” frequently. We'll explore some of the key Agile planning and engineering practices like continuous planning, Test-Driven Development, Continuous Integration and Acceptance testing. We'll look at the discipline involved in these practices, their inter-relationship, and the benefits they realize in delivering value to the customer.
The basics of developing for the Android platform will be explored, from setting up the SDK to using the Android Studio IDE and the generated Gradle build files. No previous experience is required, other than a basic knowledge of Java.
After discussing how Android fits into the marketplace, we'll look at creating applications, how to use activities, and working with layouts.
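For a flavor of the code involved, here is a minimal activity (a sketch only; a real application would typically inflate an XML layout rather than build a view in code):

    import android.app.Activity;
    import android.os.Bundle;
    import android.widget.TextView;

    public class MainActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            TextView view = new TextView(this);
            view.setText("Hello, Android");
            setContentView(view);   // make this view the activity's content
        }
    }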
Building on the previous talk, we'll add intents, customized layouts for alternative configurations, talk about the activity lifecycle, use logging, and more.
We'll deploy to both emulators and connected devices, and change input styles.
In modern applications, JavaScript is increasingly prevalent both on the client side and to some degree on the server side. As we continue to crank out more JavaScript code, we're finding that many of the same hard lessons we learned in writing decoupled Java code are equally desirable in JavaScript code. Without the benefit of dependency injection and AOP, both Java and JavaScript code can quickly become an unnavigable and untestable mess.
Where frameworks like Spring have helped us gain control over our Java code, Cujo.js similarly aims to give our JavaScript code more structure and testability.
In this session, we'll look at Cujo.js, an “unframework” that provides dependency injection that takes JavaScript's unique needs into consideration to create loosely-coupled code. We'll also see how, although Cujo.js isn't strictly a UI framework, elements of Cujo.js can be brought together to elegantly build client-side UIs.
Relational databases have ruled the world since the dawn of time (or so it appears). They power our enterprises, and for many in the corporate world it may be hard to imagine life without them. Each decade a novel idea would challenge the status quo and make a case to deviate from tradition. A flock of enthusiastic programmers, like your humble speaker back in the early 90s, would throw their support behind it, only to be crushed eventually by the large vendors and enterprise standards. But the excitement around NoSQL has shown that enterprise data is not the only thing that's persistent.
In this presentation we'll learn how NoSQL deviates from those deep rooted traditional approaches, and how this may be useful. We will also discuss the situations where these types of databases may be more appropriate.
Developing a rich user interface for web applications is both exciting and challenging. HTML 5 has closed the gaps and once again brought a new vibe into programming the web tier. Come to this session to learn how you can make use of HTML 5 to create stellar applications.
Attendees are expected to pair up and work on the labs. Software requirements:
User experience and rich interaction are at the top of the list of things that influence the success and adoption of applications. Such richness and interaction were owned by desktop and native applications in the past. Over recent years the web has become increasingly interactive, but true richness was still lacking. All of that has changed, however, with HTML 5 canvas and animation techniques.
Come to this session to learn how to create rich interactive applications with good animation, mouse and touch based interactions.
Mobile devices are gaining popularity but the diversity of these devices poses a challenge for development. HTML 5 promises to bring a consistent API to program the client side for both desktop and diverse mobile devices. One additional challenge with mobile devices is network connectivity, or the lack of it. Mobile devices are often taken into remote areas with low or no connectivity. While a native app can run on the device anytime, a web based application by default depends on having connectivity, unless of course we use the offline capabilities of HTML 5. Come to this presentation to learn how to create applications that can switch seamlessly between connected and offline mode.
Learn about offline options, configurations, local storage and how to program all of that, in this highly interactive, live coding session.
What do you need to know about combinatorics, number theory, and the underpinnings of public key cryptography? Well, maybe more than you think!
In this talk, we'll explore the branch of mathematics that deals with separate, countable things. Most of the math we learn in school deals with real-valued quantities like mass, length, and time. However, much of the work of the software developer deals with counting, combinations, numbers, graphs, and logical statements: the purview of discrete mathematics. Join us for this brief exploration of an often-overlooked but eminently practical area of mathematics.
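As one small example of that purview, the binomial coefficient from combinatorics counts the number of ways to choose k items from n:

    \binom{n}{k} = \frac{n!}{k!\,(n-k)!}, \qquad \binom{52}{5} = \frac{52!}{5!\,47!} = 2{,}598{,}960

which is the number of distinct five-card hands that can be dealt from a standard deck.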
The Spock framework brings simple, elegant testing to Java and Groovy projects. It integrates cleanly with JUnit, so Spock tests can be integrated as part of an existing test suite. Spock also includes an embedded mocking framework that can be used right away.
In this presentation, we'll look at several examples of Spock tests and review most of its capabilities, including mock objects and integration with Spring.
Where do defects come from? Technical debt is often one of our biggest challenges, as poor design and defects build up over time by cutting corners here and there. We will discuss some key technical metrics that can shine light on these defects before they get out of control, and find those that are out of control and worth your attention now.
We'll explore how the psychological effect of simply measuring technical metrics and making them visible can have an immense impact on reducing occurrences going forward.
In this session we will look at how we can refactor our learning - what tools and methodologies we can use to help us learn quicker and better, and how we can create a store that gives us quick access to information when we really need it.
We all work in an industry in which not only do the tools that we use change every few years, but one in which we have to shift the very paradigms these tools are built on! Even the most trivial of projects entails tens of different toolkits, frameworks, and languages coming together, and somehow we need to know how to leverage each one. How does one keep up? Despite all our years in school, and our inborn nature to learn, we are often never taught how to learn. How can we learn faster, and retain even more?
In this session we will take a look at various tools and techniques available to us and see how we can make our learning effective.
Find yourself overwhelmed with hundreds of to-dos? Is your hard drive littered with dozens of killer ideas that you started with enthusiasm and then just fizzled away? Do you feel like you are moving as fast as you can but only getting to the wrong place quicker? Well, perhaps this session will help.
There are various techniques and strategies available to us today that aim to help with exactly this conundrum - from Getting Things Done ™ to Personal Kanban. Unfortunately it is often easy to be extremely productive using these systems, but not very effective. After all, it's not about getting things done, but getting the RIGHT things done. In this talk we will discuss not only how to get things done, but also attempt to figure out what it is you actually need to be doing.
In this session, I will attempt to show you how you can leverage various strategies to be more effective, knock to-dos out and have fun while doing it. If time permits we will close with an overview of the tools that are available to you, and how you can use these to become a to-do list ninja :)
The key to understanding Clojure is ideas, not language constructs.
In this talk, we will approach Clojure via 10 Big Ideas.
Each of these ideas is valuable and useful a la carte, not only in Clojure. Taken together, they begin to fill in the picture of why Clojure is changing the way many programmers think about software development.
The software industry changes rapidly, but you can protect yourself from these changes by creating code that is complicated enough that only you can maintain it.
Of course you should not engage in obvious bad practices. The good news is that you don't have to. You can follow idiomatic industry practice and stay buzzword compliant with the latest trends, while quietly spreading complexity throughout systems. Better yet, the symptoms will show up not in your own code, but in other code that uses your code, directly or indirectly. You will be a hero as you lead larger and larger teams burning the midnight oil to keep systems alive.
Practice these principles, and your code will have an infectious complexity that guarantees you will always be needed to maintain it.
Simulation allows a rigorous, scalable, and reproducible approach to testing. The separation of concerns, and the use of a versioned, time-aware database, give simulation great power. This talk will introduce simulation testing, walking through a complete example using Simulant, an open-source simulation library.
Simulation allows a rigorous, scalable, and reproducible approach to testing:
Simulation begins with statistical models of the use of your system. This model includes facts such as “we have identified four customer profiles, each with different browsing and purchasing patterns” or “the analytics query for the management report must run every Wednesday afternoon.” Models are versioned and kept in a database.
The statistical models are used to create activity streams. Each agent in the system represents a human user or external process interacting with the system, and has its own timestamped stream of interactions. With a large number of agents, simulations can produce the highly concurrent activity expected in a large production system.
Agents are scaled across as many machines as are necessary to both handle the simulation load, and give access to the system under test. The simulator coordinates time, playing through the activity streams for all the agents.
Every step of the simulation process, including modeling, activity stream generation, execution, and the code itself, is captured and stored in a database for further analysis. You will typically also capture whatever logs and metrics your system produces.
Since all phases of a simulation are kept in a database, validation can be performed at any time. This differs markedly from many approaches to testing, which require in-the-moment validation against the live system.
The separation of concerns above, and the use of a versioned, time-aware database, give simulation great power. Imagine that you get a bug report from the field, and you realize that the bug corresponds to a corner case that you failed to consider. With a simulation-based approach, you can write a new validation for the corner case, and run that validation against your past simulation results, without ever running your actual system.
This talk will introduce simulation testing, walking through a complete example using Simulant, an open-source simulation library.
Traditional automated testing approaches combine input generation, execution, output capture, and validation inside the bodies of single functions. Generative testing approaches gain expressive power by isolating these steps.
With generative testing:
There are a number of benefits to this approach:
This talk introduces test data generation and generative testing, using for its examples the data.generators and test.generative libraries developed by the author.
The goal of core.async is to decouple producers and consumers of information in software, without dictating specific thread or blocking semantics, and without introducing callback hell. Also, to do all of this as a library, portable anywhere you can run a dialect of Clojure.
Queues are a powerful tool for decoupling software programs. Unfortunately, platforms that have powerful queueing libraries (e.g. Java) may require blocking threads on the ends of the queue. And the world's biggest platform (JavaScript in the browser) doesn't even have queues or threads.
core.async builds upon the work done with Communicating Sequential Processes, and provides:
In this talk, we will cover the design of core.async, and then move directly to exploring core.async's capabilities:
- put >!! and take <!!
- go blocks to invert control, achieving async without threads
- put >! and take <!
- alts! and alts!!
Finally, we will assemble these primitives into substantial working programs, building toward the Holy Grail of async: substantial UI application development in the browser, with no callbacks in sight.
This is the first in a new series on resource-oriented systems. The goal of the series is to provide practical guidance on the design and implementation of next-generation systems that are flexible, extensible, high-performance and future-friendly. The talks are designed to work as an arc, building upon each other, but they should also stand alone. The first topic is a guided walk-through of building quality REST APIs.
We will focus on the architecture of the Web and how it can help us model and manipulate our important business concepts. We will discuss the role of stable identifiers, intentional representation design, hypermedia affordances and architectural consistency. The goal is not to be “RESTful”, the goal is to build systems that display the properties we require.
This talk will be accessible for people new to REST, but also different enough that those who have attended previous REST talks will learn new things.
This is the second in a new series on resource-oriented systems. The goal of the series is to provide practical guidance on the design and implementation of next-generation systems that are flexible, extensible, high-performance and future-friendly. The talks are designed to work as an arc, building upon each other, but they should also stand alone. This second talk is an introduction to the use of Semantic Web technologies to enable collaboration without coordination.
REST is a means to an end, but it is not a satisfactory end state. It usually pushes complexity to the client in ways that make data integration difficult across multiple sources. The W3C Semantic Web initiative introduces us to new technologies for linking resources and querying across them in powerful new ways. We will learn about the RDF model, what it brings to the table and how we can use it connect information regardless of where and how it is stored. We will use the SPARQL protocol and query language to ask powerful questions of arbitrary resources. We will also see how we can create new information just by asking for it.
This is the third in a new series on resource-oriented systems. The goal of the series is to provide practical guidance on the design and implementation of next-generation systems that are flexible, extensible, high-performance and future-friendly. The talks are designed to work as an arc, building upon each other, but they should also stand alone. This third talk will introduce you to RDFa, one of the most exciting technologies around, estimated to be used on at least 25% of the indexed Web.
We understand that documents contain information, but it is usually only accessible to humans if they know where and how to find them. What if we could automatically extract arbitrary information about arbitrary domains and connect it to information held elsewhere? What if we could use the information in a document to help us organize our content better? What if this embedded information could help external search engines index the public Web better and improve your rankings?
This talk will show you how to weave and extract information in HTML, XHTML and arbitrary XML using standard tools such as RDFa. In the process, you will learn how to free the information so that it may be reused in powerful and unanticipated ways.
If you're not terrified, you're not paying attention.
Publishing information as webs of data does not require us to just give it away. We have a series of tools and techniques for managing identity, authentication, authorization and encryption so we only share content with those we trust.
Before we tackle Web Security, however, we need to figure out what we mean by Security. We will pull from the worlds of Security Engineering and Software Security to lay the foundation for technical approaches to protecting our web resources.
The surge of interest in the REpresentational State Transfer (REST) architectural style, the Semantic Web, and Linked Data has resulted in the development of innovative, flexible, and powerful systems that embrace one or more of these compatible technologies. However, most developers, architects, Information Technology managers, and platform owners have only been exposed to the basics of resource-oriented architectures.
This talk, based upon Brian Sletten's book of the same name, is an attempt to catalog and elucidate several reusable solutions that have been seen in the wild, in the now increasingly familiar “patterns” style. These are not turnkey implementations but, rather, useful strategies for solving certain problems in the development of modern, resource-oriented systems, both on the public Web and within an organization's firewalls.
In our industry, we have a problem. It's called the Software Problem. It is an embarrassing indictment of our capacity to deliver quality software on time and under budget. Beyond that, when we do deliver running code, it is often fragile and hard to extend. There are many reasons for this and many solutions. But one that does not get enough attention is how we approach information.
This is a theoretical discussion. You may not learn something you can use right away, but you may learn new ways of thinking about designing and building systems with an information-centric focus. We will discuss the roles of databases, services, software models, REST, the Web and the roles they all play together.
Logic programming delivers great expressive power: you work in domain abstractions, and your program figures out execution details. And it has never been easier. With the Datalog engine in Datomic Free, you can work both with databases and with your own data in plain Java collections.
Datalog queries have the expressive power of SQL plus recursion, so they are a good fit for systems that want “the power of SQL” but with, e.g., more flexibility. When combined with a universal schema, Datalog is particularly easy to learn and to read.
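For instance, here is a rough sketch of running a Datalog query over plain Java collections via Datomic's Peer.q entry point (the data is invented, and the exact API shape should be checked against the Datomic documentation):

    import java.util.Arrays;
    import java.util.Collection;
    import java.util.List;
    import datomic.Peer;

    public class DatalogOverCollections {
        public static void main(String[] args) {
            // Bind [first last] pairs from an in-memory collection and find the first names.
            Collection<List<Object>> results = Peer.q(
                "[:find ?first :in [[?first ?last]]]",
                Arrays.asList(
                    Arrays.asList("John", "Doe"),
                    Arrays.asList("Jane", "Smith")));
            System.out.println(results);   // something like [[John] [Jane]]
        }
    }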
Datomic's datalog implementation works with immutable database values, providing additional benefits:
In this talk you will learn
Robert Martin assembled the SOLID family of principles to provide a useful guide to help us create object-oriented software designs that were resilient in the face of change. In recent years, the need to write highly-concurrent software in order to leverage increasingly ubiquitous multicore architectures, as well as general interest in more effectively controlling complexity in large software designs, has driven a renewed interest in the functional programming paradigm. Given the apparent similarity in their goals, “What is the intersection of SOLID with functional programming?” is a natural question to ask.
In this talk, we'll explore this intersection. We'll begin with a tour of the evolutionary patterns associated with enterprise software and programming paradigms, as well as take a look at the ongoing quest for best practices, the goal being to elucidate the motivation for examining this intersection of SOLID and functional programming. We'll then walk through each of the SOLID principles, examining them in their original object-oriented context, and looking at example problems and solutions using the Java language. Then for each principle, we'll examine its possible intersection with the functional programming paradigm, and explore the same problems and solutions using the Clojure language. We'll close by examining the transcendent qualities of the SOLID principles and how they can make any design simpler, regardless of the programming paradigm employed.
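As a small preview of that intersection, here is a Java sketch (the session itself also shows the Clojure side) of the Open/Closed Principle expressed functionally: the selection logic is closed for modification, while new rules arrive as values rather than new subclasses.

    import java.util.Arrays;
    import java.util.List;
    import java.util.function.Predicate;
    import java.util.stream.Collectors;

    public class OpenClosedFunctionally {
        // Closed for modification: this method never needs to change.
        // Open for extension: callers pass in new rules as predicates.
        static List<Integer> select(List<Integer> numbers, Predicate<Integer> rule) {
            return numbers.stream().filter(rule).collect(Collectors.toList());
        }

        public static void main(String[] args) {
            List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6);
            System.out.println(select(numbers, n -> n % 2 == 0));   // [2, 4, 6]
            System.out.println(select(numbers, n -> n > 4));        // [5, 6]
        }
    }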
For much of the last two years I've delivered a two-part series at NFJS shows entitled “Effective Java Reloaded.” For all practical purposes, it is an a la carte-style rehash of the book Effective Java, written by Josh Bloch. One of my favorite parts of the discussion is Item #15, which tells us to “Minimize Mutability.” If we turn this inside out, we're actually saying that we want to MAXIMIZE IMMUTABILITY. When we do this, we reap many benefits, such as code that is easier to reason about and that is inherently thread-safe. This can carry us a long way in the direction of program correctness and decreased complexity. However, when we start to program with immutability, several major questions arise.
First, the necessity of using a separate object for each distinct value, never reusing or “mutating” an object, can quickly cause performance concerns. These concerns are amplified when we're talking about large collections such as lists and maps. These problems are largely solved by what we call “persistent data structures.” Persistent data structures are collections from which we create new values not by copying the entire data structure and applying changes, but by creating a new structure which contains our changes and points at the previous structure for those elements which have not changed. This allows us to work with data structures in a very performant way with respect to time and resource consumption. We'll examine persistent data structures, their associated algorithms, and implementations on the JVM such as those found in the TotallyLazy library.
Second, because all of an immutable object's state must be provided at the time of construction, the construction of large objects can become very tedious and error prone. We'll examine how the Builder pattern can be applied to ease the construction of large objects, and we'll examine Builder implementations in Java and Groovy.
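A minimal Java sketch of the idea (the Coordinate class is invented for illustration): an immutable value class whose construction is delegated to a nested Builder.

    // Immutable: all fields are final and there are no setters.
    public final class Coordinate {
        private final double latitude;
        private final double longitude;

        private Coordinate(Builder builder) {
            this.latitude = builder.latitude;
            this.longitude = builder.longitude;
        }

        public double getLatitude()  { return latitude; }
        public double getLongitude() { return longitude; }

        public static class Builder {
            private double latitude;
            private double longitude;

            public Builder latitude(double latitude)   { this.latitude = latitude; return this; }
            public Builder longitude(double longitude) { this.longitude = longitude; return this; }

            public Coordinate build() { return new Coordinate(this); }
        }
    }

Usage reads fluently: new Coordinate.Builder().latitude(35.2).longitude(-80.8).build().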
Third, we run into problems when we start to use frameworks that expect us to program in a mutable style. A prime example is Hibernate, which expects our persistent classes to follow the well-worn JavaBean convention, including a no-argument constructor and getters and setters for each property. Such a class can never be immutable! So how do we program with frameworks such as Hibernate and yet still minimize mutability? The key is found in not letting frameworks dictate the way that you design your code. Just because the framework requires something, don't let it force you to make the wrong decision. Use the framework as a tool to write your code; don't let your code be a tool of the framework. We'll examine strategies for doing exactly that.
You should come away from this talk better equipped to program in a way that minimizes mutability and maximizes immutability.
In recent years, the cloud has gone from Larry Ellison's “Maybe I'm an idiot, but I have no idea what anyone is talking about,” to Microsoft's “TO THE CLOUD!” to a central part of many companies' IT strategy. At the same time, the way that we consume the cloud has continued to evolve. Many of today's cloud efforts revolve around utilization of various “infrastructure as code” products (e.g. Puppet and Chef) and homegrown automation to create deployment pipelines. When we start at this level, we often end up reinventing many of the same wheels as we climb the abstraction ladder.
Platform as a Service (PaaS) offerings are positioned to allow developers (and operators) to start climbing the abstraction ladder from a higher rung, shifting the model from machine-centric deployment to application-centric deployment. This session will focus on life as an application developer using Cloud Foundry as a PaaS, with demos using Pivotal's Hosted CF at http://run.pivotal.io.
We'll cover the following topics:
All parts will require:
Needless to say you'll need a laptop! For best experience, a Mac or Linux environment is ideal (it's technically possible to get most of this working on Windows, but it's VERY DIFFICULT). Also, you'll need at least 8 GB of RAM to support running Cloud Foundry locally without too much of a performance hit.
Part one of the workshop will require a free account on Pivotal Web Services. You can create a 60-day trial with no credit card at https://run.pivotal.io.
Parts two and four will require the use of BOSH Lite. More instructions will be provided at the conference. But, to ease setup, please install the following dependencies (in addition to those for part one!):
An alternative to setting up BOSH Lite if you have trouble with the environment (or you are stuck on Windows!) is to use TryCF. This only requires an AWS account (obviously a credit card required) and an email address, and eliminates the need for this entire dependency list. The estimated cost is 28 cents per hour, so you won't run up a huge bill!
Part Three will require the use of a FREE Cloudbees account. Create one at https://grandcentral.cloudbees.com. Once you're set up, click on “Get Started with Builds” to provision your Jenkins instance. We'll complete the remainder of the setup during the lab, but it's important to get the Jenkins instance provisioned in advance so that you don't lose 10 minutes of workshop time!
It will also require the use of a FREE TRIAL account of Artifactory Online from JFrog. You can obtain an account at https://www.jfrog.com/registration/registration.html. This one is a bit quicker, but you still should try to create it before the workshop.
Even with the recent explosion in alternative languages for the JVM, the vast majority of us are still writing code in “Java the language” in order to put bread on the table. Proper craftsmanship demands that we write the best Java code that we can possibly write. Fortunately we have a guide in Joshua Bloch's Effective Java.
In his foreword to the first edition, Guy Steele writes about the importance of learning three aspects of any language: grammar, vocabulary, and idioms. Unfortunately many programmers stop learning after mastering the first two. Effective Java is your guide to understanding idiomatic Java programming.
Effective Java is organized into 78 standalone “items,” far too many to cover in one session. Instead I've chosen a subset of the most important techniques and practices that are commonly missed by today's Java programmers. You'll pick from a menu and decide where we'll head. Regardless of the path we take, you'll leave this session thoroughly equipped to write better Java code tomorrow!
The Gradle build tool is one of the most successful projects in the Groovy ecosystem because it addresses a difficult problem – every major build is a custom build. Gradle builds are written in Groovy, so the full power of the language is available if you need it. Gradle supports Maven project structure and repositories and uses Ivy dependency management without being bound by their normal constraints. With major systems like Grails, Hibernate, and the Spring Framework moving to Gradle, this is a technology worth taking the time to understand.
This talk will cover the basics of Gradle both through simple examples and by examining the build files for major open source projects.
In this session we will look at some JavaScript patterns, and how you can use them within your code.
No longer can you think of JavaScript as a kiddie scripting language. JavaScript has now been promoted to a first-class citizen within your application - and with this comes the need to think of better abstractions and code reuse, and even to establish a common vocabulary for discussing approaches to writing better, cleaner and ultimately more maintainable JavaScript code.
Feature requests are steadily pouring in, but the team cannot respond to them. They are paralyzed. The codebase on which the company has “bet the business” is simply too hard to change. It's your job to clean up the mess and get things rolling again. Where do you begin? Your first task is to get the lay of the land by applying a family of techniques we'll call “Code Archaeology.”
In this session we will learn how to systematically explore a codebase. We'll look at what tools are available to help us out, slinging some wicked shell-fu along the way. We'll look at “code islands” and “code bridges,” and how to construct a “map of the code.” We'll also examine the wisdom that thought leaders like Michael Feathers and Dave Thomas have lent to this subject.
Once we've gained a thorough understanding of what we have in front of us, we'll learn approaches for getting the system under test and refactoring so that we can start to pick up the pace and respond to user requirements without making a bigger mess. You'll leave this session well prepared to tackle the next “big ball of mud” that gets dumped on your desk.
Have you ever wished that your local development sandbox could look exactly like production, but you've got a mismatch between your local OS and your production OS? And what about the age old “it works on my machine” excuse that quite often stems from differences between developer sandboxes? Many have turned to virtualization, creating a machine image that can be passed around the team. But who manages the template? How do you keep things in sync?
In this session, we'll explore Vagrant (http://www.vagrantup.com), an open source tool that allows you to easily create and manage virtual development environments that can be provisioned on demand and “thrown away” when no longer needed.
Vagrant is most powerful when we think of it as a tool to enable various workflows that are useful to software development teams. In this talk, we'll walk through the following workflows and examine Vagrant's contributions:
Presentation on Akka: a set of various tools for writing concurrent, fault-tolerant applications using immutable data, asynchronous message passing with local and remote actors, software transactional memory, and supervised systems.
Akka is middleware, but it is not your 1990s middleware. Akka is a set of various tools for writing concurrent, fault-tolerant applications using immutable data, asynchronous message passing with local and remote actors, software transactional memory, and supervised systems. Akka is also part of the Typesafe stack, a stack that includes the Play web framework and the Scala programming language. This Akka presentation will cover both Scala- and Java-style usage of Akka and give the audience a 30k view of how it comes together. While this presentation is not interactive, all demos will be available on GitHub for those that want to “play” along with their laptops.
This presentation covers the Guava library developed by Google (http://code.google.com/p/guava-libraries/). Guava provides collection extensions to the Java Collection API and, along with this, a cornucopia of time-saving utilities that bring Java as close as possible to some of the more functional and dynamic language competitors like Scala, Ruby, and Clojure.
This presentation focuses on the following topics: how to make Predicates and Functions; how to use new collection constructs that make life easier, including MultiMap, BiMaps, and MultiSets; how to set up and use Guava preconditions; and how to create truly immutable collections, and more. All of this is done with Java.
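A brief sketch of the flavor of code we'll write (using Guava's Multimap, ImmutableList, and Preconditions):

    import com.google.common.base.Preconditions;
    import com.google.common.collect.ArrayListMultimap;
    import com.google.common.collect.ImmutableList;
    import com.google.common.collect.Multimap;

    public class GuavaDemo {
        public static void main(String[] args) {
            // A Multimap maps one key to many values without hand-rolled List juggling.
            Multimap<String, String> stateToCity = ArrayListMultimap.create();
            stateToCity.put("IL", "Chicago");
            stateToCity.put("IL", "Springfield");
            System.out.println(stateToCity.get("IL"));   // [Chicago, Springfield]

            // A truly immutable collection.
            ImmutableList<String> langs = ImmutableList.of("Java", "Scala", "Clojure");

            // Preconditions fail fast with a clear message.
            Preconditions.checkArgument(!langs.isEmpty(), "langs must not be empty");
        }
    }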
Groovy isn't designed to replace Java – it just makes Java cleaner and easier to develop. This presentation will look at various tasks Java developers need to do and demonstrate ways Groovy can help.
Topics will include building and testing applications, accessing both relational and NoSQL databases, working with web services, and more.
Groovy has a very easy learning curve for Java developers, so many people become Groovy users without realizing all it can do. This presentation will examine features of Groovy that can make your life easier once you're past the initial adoption stage.
Examples will include closure coercion, mixins, simple runtime metaprogramming, operator overloading, drop and take, a tour through some of the overlooked methods in the Groovy JDK, and more.
Statistics is hot lately, due in part to the easy availability of large data sets and the successes of people like Nate Silver. These aren't your father's statistics, however. A quiet revolution has swept through the field, shifting it from traditional frequentist methods toward a more Bayesian approach. This talk will discuss Bayes' Theorem and show you how to do simple, back-of-the-envelope calculations to apply it to a wide variety of problems.
In addition, we'll also talk about common errors non-experts make when dealing with statistical conclusions, ranging from small sample size issues to the use of arbitrary endpoints to the problem of overfitting and more.
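The back-of-the-envelope machinery is Bayes' Theorem itself; as a worked example (the screening numbers below are illustrative, not from the talk), consider a test with 99% sensitivity and a 5% false-positive rate for a condition with 1% prevalence:

    P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}

    P(\text{disease} \mid +) = \frac{0.99 \times 0.01}{0.99 \times 0.01 + 0.05 \times 0.99} = \frac{0.0099}{0.0594} \approx 0.167

So even a strongly positive result indicates disease only about one time in six - exactly the kind of counterintuitive outcome this talk explores.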
This is a revised and updated version of the previous talk, with current thinking from practice and the literature. The talk presents why conflicts with your manager are inevitable based on differences in priorities and perspectives, and how to plan for them. The goal is to show you how to build the loyalty relationship that allows you to get what you need when you need it.
Topics covered will include diagnosing communication styles, lessons from game theory, working within the organizational hierarchy, and lessons on how to build a relationship with your manager that still allows you the freedom to express yourself and what you really want.
Time is very precious and is often threatened by phone calls, emails, co-workers, bosses, and most of all, yourself. The Pomodoro Technique reins in unfocused time and gives your work the urgency and the attention it needs - and it's done with a kitchen timer.
In this presentation we discuss how to set up, estimate time, log time, deal with interruptions, and integrate with Agile as a team. We discuss timer software and even some of the great health benefits of the Pomodoro Technique.
Have you looked into Scala? Scala is a new object-functional JVM language. It is statically typed and type inferred. It is multi-paradigm and supports both object oriented and functional programming. And it happens to be my favorite programming language.
If you are interested in Scala, how are you planning to learn it? You will probably pick up a book or two and follow through some examples, and hopefully, at some point down the line, you will learn the language and its syntax, and if you get excited enough maybe build large applications using it. But what if I told you that there is a better path to enlightenment for learning Scala?
Enter the Scala Koans, a set of test cases that will teach you the Scala language. The Scala Koans will help the audience learn the language, its syntax, and its structure through test cases. They will also teach the functional programming and object-oriented features of the language. Since learning is guided by failing tests, it allows developers to think and play with the language while they are learning.
Namaste,
For those planning to attend the Scala Koans…
Welcome to Scala Koans!
Scala Koans is an interactive session that puts the programming and learning in your hands. Therefore, a laptop is required for all participants. If you do not have a laptop, then perhaps you have a friend with a laptop; if so, that would work too. In order to participate in the Scala Koan endeavor, a few things are required:
The process of actually running the koans will be covered during the session. Unfortunately, Internet connectivity is sometimes a dicey affair and at times it can rain on our parade. To avoid having to wait for the install at the conference you can prepare for the koans before the conference! If you don't have the opportunity to do this, we will have either memory sticks or private networks at the conference.
If you want to get started with the set up:
Before attending the koans session, you may want to take the opportunity to load some Scala Plugins onto your favorite IDE and Editor. Below is a list of resources that you can use to enhance your environment so that you can enjoy Scala syntax highlighting and other helpful tools like refactoring, debugging and analysis.
Eclipse - Eclipse has a Scala IDE plugin aptly called scala-ide. All the information about the plugin can be found at http://scala-ide.org, including an easy-to-follow video located at http://scala-ide.org/docs/current-user-doc/gettingstarted/index.html. Scala-IDE is also available at the Eclipse Marketplace!
IntelliJ - IntelliJ has a Scala plugin that can be found by going to Settings -> Plugins, clicking on the 'Browse Repositories' button and searching for the 'Scala' plugin. Right-click on 'Scala' and choose 'Install'. IntelliJ will prompt you to restart the IDE; do so, and enjoy.
NetBeans - Currently, Github user 'dcaoyuan' hosts a NetBeans Scala plugin at the address: https://github.com/dcaoyuan/nbscala. I have not tried this out since the number of NetBeans users has shrunk in recent years. If you are an avid NetBeans user, and wish to try it, you can let me know the results during the session. There is additional information at: http://wiki.netbeans.org/Scala
Emacs - GitHub user 'aemoncannon' has created 'ENSIME' (ENhanced Scala Interaction Mode for Emacs), which has a great following. It is available at https://github.com/aemoncannon/ensime, with documentation at http://aemoncannon.github.io/ensime.
VIM - VIM users can use https://github.com/derekwyatt/vim-scala, a VIM plugin that offers Scala color highlighting.
That is it. Hope to see you soon.
We all have to do it at some point: give presentations to an audience, in public or at work. You've seen presentations and most likely have given some. You've prepared well on the technical topic, yet you're nervous about presenting it. You ask: what should you do to prepare? How do you deliver a great presentation?
In this presentation we'll learn a few basic principles, or commandments, that can help improve our presentation skills and make it possible to deliver that great presentation we aspire to.
JavaScript is powerful, but it is important not to let that turn into a beast on our projects. In this workshop we will first learn some core features of JavaScript and techniques to tame this language. We will then explore ways to modularize it. Come prepared with your laptops to crank out some code and work along exercises.
Working with JavaScript, prototypal inheritance, node.js, server-side JavaScript, and modularization techniques.
Gradle is a compelling new build tool that incorporates the lessons learned from a decade of Ant and Maven. More than just a compromise between declarative and imperative build formats, or between convention and configuration, Gradle is a sophisticated software development platform that makes simple builds easy and makes complex, highly automated continuous software delivery pipelines possible. Using its extensible APIs and expressive DSL, you're equipped to build your next build.
Bring your laptop to this session for the following:
This session will move beyond the basics to cover Android persistence mechanisms, accessing RESTful web services, and more. We'll look at shared preferences, basic file I/O, and the SQLite database. We'll also show how to operate off of the UI thread to access data from remote servers.
The application will access JSON data on a remote server, parse the data, and update the user interface based on the response.
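A rough sketch of two of those pieces - SharedPreferences for simple key/value persistence and an AsyncTask to keep the network call off the UI thread (class names and the URL are placeholders for illustration):

    import android.app.Activity;
    import android.content.Context;
    import android.content.SharedPreferences;
    import android.os.AsyncTask;

    public class DataDemoActivity extends Activity {

        void savePreference() {
            // SharedPreferences: lightweight key/value persistence.
            SharedPreferences prefs = getSharedPreferences("settings", Context.MODE_PRIVATE);
            prefs.edit().putString("username", "nfjs").apply();
        }

        void fetchRemoteJson() {
            // Network I/O must stay off the UI thread; AsyncTask is one way to do that.
            new AsyncTask<String, Void, String>() {
                @Override
                protected String doInBackground(String... urls) {
                    // Open an HttpURLConnection to urls[0], read the body, and return the JSON text.
                    return "{}";
                }

                @Override
                protected void onPostExecute(String json) {
                    // Runs back on the UI thread: parse the JSON and update views here.
                }
            }.execute("https://example.com/data.json");
        }
    }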
No matter where you slice software engineering:
The root cause of many, if not most, problems is the common absence of critical thinking in how we approach decision making. Instead of thinking critically about our engineering decisions, we often follow a Cargo Cult mentality or blindly follow the pronouncements of the Blowhard Jamboree. The end results all too often include suboptimal productivity, excessive spending, poor quality and cancelled projects.
When we think instead critically about a component of software engineering, we take it apart. We discard our presuppositions. We challenge tradition. We gather our own evidence. We question everything.
This talk will examine the pathologies associated with not thinking critically, including a tour of the antipatterns that can emerge from such a practice. We'll then walk through the concentric circles of the critical thinking process, including evidence evaluation, argument evaluation, and argument construction. You'll leave this session with a critical thinking framework which can be applied to software engineering as well as beyond.
You can program higher order functions in Groovy quite easily using closures. But the benefits of closures go far beyond that. Groovy has a variety of capabilities hidden in closures.
In this presentation, we will unlock that treasure and explore ways in which we can design applications using Groovy closures, to apply different design patterns, to create fluent interfaces, and even program asynchrony.
Part of creating rich applications is making them highly responsive. No one wants to wait for a full page to load and display or watch the spinning beach ball. A few seconds' delay is equivalent to eternity in user experience. So the obvious answer is to turn to multithreading and asynchronous programming. But we're talking about JavaScript here, which is largely known for being single-threaded. If you've confronted creating responsive applications in JavaScript, this session is for you.
We will learn the concepts and the fundamentals of creating asynchronous apps in JavaScript and make use of libraries that will make our lives easier to create highly responsive applications.
Since writing the second edition of the O'Reilly JMS 1.1 book I've spent a lot of time speaking and writing about AMQP (Advanced Message Queuing Protocol). While I'm still excited about AMQP, I'm also excited about another area of messaging - the recently released JMS 2.0 specification. I'll begin this live coding session with a high-level summary of what has changed with the JMS 2.0 specification release. Then, through live coding, I will give you a first-hand look at the new simplified API and the new features of the latest JMS specification that not only make it easier to develop messaging applications, but also make it more suitable for enterprise-level messaging. There is a lot happening in the messaging space right now - climb on board this session to see what all the buzz is about!
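As a preview, the JMS 2.0 simplified API collapses the old Connection/Session/MessageProducer ceremony into a single JMSContext; a minimal sketch:

    import javax.jms.ConnectionFactory;
    import javax.jms.JMSContext;
    import javax.jms.Queue;

    public class Jms2Demo {
        public void sendAndReceive(ConnectionFactory connectionFactory, Queue queue) {
            // JMSContext is AutoCloseable and replaces Connection + Session boilerplate.
            try (JMSContext context = connectionFactory.createContext()) {
                context.createProducer().send(queue, "Hello, JMS 2.0");
                String body = context.createConsumer(queue).receiveBody(String.class, 1000);
                System.out.println(body);
            }
        }
    }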
Agenda: