There is nothing better than looking at real-world examples to understand project failures and project successes. This session is intended to be an open conversation, modeled closely on a birds of a feather (BOF) session, but it will work through a series of “that happened to me” topics, each discussed from a technology perspective.
The discussion will draw on clients dating back to 2005. The actual clients and their names will not be revealed, but the industries, the constraints, and some of the outcomes will.
The first part of the Continuous Delivery workshop covers the differences between continuous integration, continuous deployment, and continuous delivery. It also introduces the deployment pipeline, along with its usage, patterns, and anti-patterns. This part concludes with some applied engineering principles.
Releasing software to actual users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers and operations, delivery teams can get changes released in a matter of hours—sometimes even minutes—no matter what the size of a project or the complexity of its code base. The workshop materials are derived from the best-selling book Continuous Delivery and were created in collaboration with its authors and other ThoughtWorks colleagues of mine. Continuous Delivery details how to get fast feedback on the production readiness of your application every time there is a change—to code, infrastructure, or configuration.
The first part of the workshop describes the technical differences between related topics such as continuous integration, continuous deployment, and continuous delivery. At the heart of the workshop is a pattern called the deployment pipeline, which involves the creation of a living system that models your organization's value stream for delivering software. I discuss the various stages, how triggering works, patterns and anti-patterns, and how to pragmatically determine what “production ready” means. This session also covers some agile principles espoused by the Continuous Delivery book, including new perspectives on things like developer workstations and configuration management.
Continuous Delivery relies on a variety of interlocking engineering practices to work efficiently; this session covers three related topics. First, I cover the role of testing and the testing quadrant. Second, I specifically cover version control usage and offer alternatives to feature branching such as feature toggles and branch by abstraction. Third, I describe some incremental release strategies, along with their impact on other stages of the project lifecycle.
Releasing software to actual users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers and operations, delivery teams can get changes released in a matter of hours—sometimes even minutes—no matter what the size of a project or the complexity of its code base. The workshop materials are derived from the best-selling book Continuous Delivery and were created in collaboration with its authors and other ThoughtWorks colleagues of mine. Continuous Delivery details how to get fast feedback on the production readiness of your application every time there is a change—to code, infrastructure, or configuration.
Continuous Delivery relies on a variety of interlocking engineering practices to work efficiently; this session covers three related topics. First, I cover the role of testing and the testing quadrant, including the audience and engineering practices around different types of tests. I also cover some best practices around testing, including testing ratios, code coverage, and other topics. Second, I specifically cover version control usage and offer alternatives to feature branching such as feature toggles and branch by abstraction. Generally, I talk about building synergistic engineering practices that complement rather than conflict with one another. In particular, I discuss why feature branching harms three other engineering practices and describe alternatives. Third, I describe some incremental release strategies, along with their impact on other stages of the project lifecycle.
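To make the feature-toggle alternative concrete, here is a minimal sketch in plain Java; the flag name, configuration source, and class names are hypothetical rather than taken from any particular toggle library. The point is that incomplete work stays on mainline but ships dark until the flag is flipped.

import java.util.Map;

// Minimal feature-toggle sketch: new code lives on mainline behind a flag
// instead of on a long-lived feature branch.
public class FeatureToggles {
    private final Map<String, Boolean> flags;

    public FeatureToggles(Map<String, Boolean> flags) {
        this.flags = flags; // in practice, loaded from configuration rather than hard-coded
    }

    public boolean isEnabled(String feature) {
        return flags.getOrDefault(feature, false);
    }
}

class CheckoutService {
    private final FeatureToggles toggles;

    CheckoutService(FeatureToggles toggles) {
        this.toggles = toggles;
    }

    String checkout() {
        // The new path is integrated continuously but stays dark in production
        if (toggles.isEnabled("new-checkout-flow")) {
            return "new checkout flow";
        }
        return "existing checkout flow";
    }
}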
Two big stumbling blocks for Continuous Delivery adoption are interactions with operations and the keepers of data. First in this session, I cover operations, DevOps, and programmatic control of infrastructure. Second, I discuss how to incorporate databases and DBAs into the Continuous Integration and Continuous Delivery process.
Releasing software to actual users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers and operations, delivery teams can get changes released in a matter of hours—sometimes even minutes—no matter what the size of a project or the complexity of its code base. The workshop materials are derived from the best-selling book Continuous Delivery and were created in collaboration with its authors and other ThoughtWorks colleagues of mine. Continuous Delivery details how to get fast feedback on the production readiness of your application every time there is a change—to code, infrastructure, or configuration.
Two big stumbling blocks for Continuous Delivery adoption are interactions with operations and the keepers of data. First in this session, I cover operations, DevOps, and programmatic control of infrastructure using tools like Puppet and Chef. I also discuss the explosion of tool alternatives in this space, and cover some current-day best practices. Second, I discuss how to incorporate databases and DBAs into the Continuous Integration and Continuous Delivery process. This includes database migrations, strategies for enhancing collaboration between application development and data, and database refactoring techniques.
This multi-disciplinary session takes a deep dive into
the confluence of topics required to fully understand the intersection
of Continuous Delivery and architecture, including evolutionary
architecture and emergent design, with an emphasis on how
architectural decisions affect the ease in changing and evolving your
code, the role of metrics to understand code, how Domain Driven
Design's Bounded Context reifies in architecture, how to reduce
intra-component/service coupling, and other techniques.
Continuous Delivery is a process for automating the production
readiness of your application every time a change occurs – to code,
infrastructure, or configuration. In the Continuous Delivery world,
rather than honing skills at predicting the future via Big Design Up
Front, teams emphasize techniques for understanding and changing code
with less cost during the process. Some architectures
and engineering practices yield better designs for this
environment. This multi-disciplinary session takes a deep dive into
the confluence of topics required to fully understand the intersection
of Continuous Delivery and architecture, including evolutionary
architecture and emergent design, with an emphasis on how
architectural decisions affect the ease in changing and evolving your
code, the role of metrics to understand code, how Domain Driven
Design's Bounded Context reifies in architecture, how to reduce
intra-component/service coupling, and other techniques.
A Technology Radar is a tool that forces you to organize and think about near-term technology decisions, both for you and your company. This talk discusses using the radar for personal breadth development, architectural guidance, and governance.
The ThoughtWorks Technical Advisory Board creates a “technology radar” twice a year, a working document that helps the company make decisions about interesting technologies and where we spend our time. ThoughtWorks then started conducting radar-building exercises for our clients, which provides a great medium for technologists company-wide to express their opinions about the technologies they use every day. For companies, creating a radar helps you document your technology decisions in a standard format, evaluate technology decisions in an actionable way, and create cross-silo discussions about suitable technology choices. This session describes the radar visualization and how to conduct a radar-building session for yourself. After a brief introduction, the bulk of the workshop consists of attendees building a radar for the group, following the same procedure you'll use when you do this exercise at your company. At the end, we'll have created a unique radar for this event, and you'll have practiced the exercise for yourself.
In this session, we will take a look at Angular - a new MVC framework by Google. We will discuss some of the terminology that Angular introduces, and see how we can use it to develop highly interactive, dynamic web applications. See “Detail” for a list of the topics covered and the GitHub repo URL.
This is an intro-level talk in which we will take a look at Angular and at developing rich web applications. Angular embraces HTML and CSS, allowing you to extend HTML towards your application, and uses plain JavaScript, which makes your code easy to reuse and test.
Note: This is an intro-level talk. It is targeted towards developers who are curious about Angular and want to learn about its fundamental features and concepts.
Topics covered:
ng-app
ng-init and the evaluation {{ }} directive
$rootScope
ng-model
$scope
ng-repeat
ng-form, form validation and submission in AngularJS
$http
GitHub URL - https://github.com/looselytyped/angudone-backend/tree/solutions
In this session we will look at some JavaScript patterns, and how you can use them within your code.
No longer can you think of JavaScript as a kiddie scripting language. JavaScript has now been promoted to a first-class citizen within your application - and with this comes the need to think of better abstractions and code reuse, and even to establish a common vocabulary to discuss approaches to writing better, cleaner, and ultimately more maintainable JavaScript code.
JavaScript will celebrate its 19th birthday in 2014. For a language that has been around for that long, it has seen very few, if any, changes to the language itself. Well, all that is about to change with ECMAScript.next (or ECMAScript 6). ECMAScript 6 modernizes JavaScript syntax, while bringing in features such as modules for better namespacing, classes as a first-class construct, and a variety of additional operators, thus ensuring that JavaScript is ready for the next era of large-scale modern web applications.
In this session we will take a look at some of the features that ECMAScript 6 brings to the table and see what kind of browser support is available for it.
Git, at its core, leverages a relatively simple data structure to maintain history. In this session we will take a look at this data structure, which in turn will give us a better view of how Git manages history, and how best to work with it. NOTE: This is NOT an introduction to Git. This session assumes familiarity with Git concepts such as init, add, commit and merge.
Git has fast emerged as one of the leaders among distributed version control systems. Git may seem arcane, but under the covers it leverages a very simple data structure to store your version history. As developers, it has always served us well to know how things fundamentally work, and Git is no different. In this talk we will explore this data structure, and how the various commands you invoke against Git mutate it.
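Conceptually (and this is a simplified illustration in Java, not Git's actual implementation), each commit is a small, content-addressed object that points at a tree snapshot and at its parent commits, so history forms a directed acyclic graph:

import java.util.List;

// Simplified model of Git's commit objects: nodes in a DAG of history.
public class Commit {
    final String id;            // in Git, the SHA-1 hash of the commit's content
    final String treeId;        // snapshot of the project at this commit
    final List<Commit> parents; // one parent normally, two or more for a merge
    final String message;

    Commit(String id, String treeId, List<Commit> parents, String message) {
        this.id = id;
        this.treeId = treeId;
        this.parents = parents;
        this.message = message;
    }
}

Branches are then just movable pointers into this graph, which is why commands like merge and rebase are best understood as operations on the graph.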
Those who may have used Git for a project or two will have realized the powerful potential of Git. But after a while we often wonder if there's more to it than meets the eye. And indeed there is! In this session we will look at some more advanced concepts of Git.
In this session we will explore some features of Git such as the reflog and the stash. We will play with some advanced options for regular commands like rebase, and finally see some common workflows that can aid us in our day-to-day interaction with Git.
This is a revised and updated version of the previous talk, with current thinking from practice and the literature. The talk presents why conflicts with your manager are inevitable based on differences in priorities and perspectives, and how to plan for them. The goal is to show you how to build the loyalty relationship that allows you to get what you need when you need it.
Topics covered will include diagnosing communication styles, lessons from game theory, working within the organizational hierarchy, and lessons on how to build a relationship with your manager that still allows you the freedom to express yourself and what you really want.
It was over 10 years ago that Spring hit the scene and made a big impact in the enterprise Java development ecosystem. Now that Spring 4.2 is available (and Spring 5 is on the way), there are a lot of new features, and a lot that you may not know about yet.
Whether you're already working with Spring 4 or are anxious to make the move up, there are plenty of new tricks Spring has in store for you. We'll explore them all in this overview of everything that's new in Spring.
In this presentation, we'll see how to use Spring to create, secure, streamline, hyperlink, and consume REST APIs.
In modern applications, there is a diverse array of clients consuming content from the web. Each of these clients has unique capabilities and limitations, demanding that the presentation of the application be tailored to each device. As a result, presentation logic is often pushed into the client itself, leaving the application to serve a common, data-oriented, lightweight API to be consumed by each client.
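As a small taste of what that looks like in Spring, here is a hedged sketch of a REST controller serving such a lightweight, data-oriented API; the class, path, and resource names are illustrative only.

import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/api/books")
public class BookController {

    // Returns a plain data object; Spring converts it to JSON for the client
    @RequestMapping(value = "/{id}", method = RequestMethod.GET)
    public Book book(@PathVariable("id") long id) {
        return new Book(id, "Continuous Delivery");
    }

    public static class Book {
        private final long id;
        private final String title;

        public Book(long id, String title) {
            this.id = id;
            this.title = title;
        }

        public long getId() { return id; }
        public String getTitle() { return title; }
    }
}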
In this session, we'll see how to build real Spring applications using Spring Boot. We'll also look under the covers to see what makes Spring Boot tick.
Spring offers a number of configuration options: XML configuration, Java configuration, and Groovy configuration to name a few. To some degree, component-scanning and autowiring help eliminate some explicit configuration. But in general most Spring applications require some essential “bootstrap” configuration to enable key functionality. What's the right way to build Spring applications when there are so many choices?
What if I told you that configuration was optional?
Spring Boot is an exciting new programming model for Spring that makes it extremely easy to create stand-alone, production-ready Spring applications. Rather than writing lots of code to satisfy the needs of a framework, Spring Boot helps you focus your coding efforts on your application. Spring Boot takes an opinionated approach to configuring Spring, making it possible to create Spring applications with little or, in some cases, no Spring configuration at all!
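For a sense of how little is required, here is a minimal sketch of a stand-alone Spring Boot application (the class name and endpoint are illustrative): one class, no XML, started from a plain main method.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@SpringBootApplication   // enables component scanning and auto-configuration
public class HelloApplication {

    @RequestMapping("/hello")
    public String hello() {
        return "Hello from Spring Boot";
    }

    public static void main(String[] args) {
        // Starts an embedded server with sensible defaults; no explicit configuration needed
        SpringApplication.run(HelloApplication.class, args);
    }
}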
In modern applications, JavaScript is increasingly prevalent both on the client side and, to some degree, on the server side. As we continue to crank out more JavaScript code, we're finding that many of the same hard lessons we learned in writing decoupled Java code are equally applicable to JavaScript code. Without the benefit of dependency injection and AOP, both Java and JavaScript code can quickly become an unnavigable and untestable mess.
Where frameworks like Spring have helped us gain control over our Java code, Cujo.js similarly aims to give our JavaScript code more structure and testability.
In this session, we'll look at Cujo.js, an “unframework” that provides dependency injection designed with JavaScript's unique needs in mind to create loosely coupled code. We'll also see how, although Cujo.js isn't strictly a UI framework, elements of Cujo.js can be brought together to elegantly build client-side UIs.
Find yourself overwhelmed with hundreds of to-dos? Is your hard drive littered with dozens of killer ideas that you started with enthusiasm and then just fizzled away? Do you feel like you are moving as fast as you can but only getting to the wrong place quicker? Well, perhaps this session will help.
There are various techniques and strategies available to us today that aim to help with exactly this conundrum - from Getting Things Done™ to Personal Kanban. Unfortunately, it is often easy to be extremely productive using these systems, but not very effective. After all, it's not about getting things done, but getting the RIGHT things done. In this talk we will discuss not only how to get things done, but also attempt to figure out what it is you actually need to be doing.
In this session, I will attempt to show you how you can leverage various strategies to be more effective, knock to-dos out and have fun while doing it. If time permits we will close with an overview of the tools that are available to you, and how you can use these to become a to-do list ninja :)
Almost every example of an agile project involves a single team and while many successful projects are delivered that way, most enterprise software requires the interaction of several teams. But how do we scale agile beyond a single team? What practices translate and which ones don't? In this talk we'll discuss some of the issues you'll encounter as you move agile beyond a single group and how you can keep multiple stakeholders happy. While it isn't as simple as having a “scrum of scrums” it isn't as hard as replacing every line of COBOL.
Almost every example of an agile project involves a single team and while many successful projects are delivered that way, most enterprise software requires the interaction of several teams. But how do we scale agile beyond a single team? What practices translate and which ones don't? In this talk we'll discuss some of the issues you'll encounter as you move agile beyond a single group and how you can keep multiple stakeholders happy. While it isn't as simple as having a “scrum of scrums” it isn't as hard as replacing every line of COBOL.
You understand the Web. Why do you do REST so wrong?
Doing REST “right” isn't a matter of conformance or purity. It isn't about pleasing hard-to-please Restafarian personalities. It's about understanding that WYBIWYG (What You Build Is What You Get). Decisions have consequences. You are free to design and implement whatever you like, but you need to understand the consequences of your choices.
REST, as defined by Roy Fielding's thesis, is a collection of architectural constraints designed to yield certain properties in deployed systems. When you take shortcuts, you simply will not receive all the benefits of loose-coupling, evolvable, flexible, scalable systems. URLs are not enough. HTTP is not enough. To fully embrace the world of REST, you must understand Hypermedia. The good news is that you already do, you've apparently just forgotten.
Here's the thing though. REST is not an endpoint, it is just a beginning. Come to this talk to hear how the story starts. We will take a deep dive into why the Web works, the implications for building Hypermedia-driven REST APIs and start to look at what this means in practice.
The cost of integrating information isn't cheap. Well, at least it isn't if you do it wrong. Chances are, you're doing it wrong.
The single most difficult aspect of data integration is the effort to achieve consensus. It isn't just that we are disagreeable people. It's also that it is a fantasy that there is a “common model” or a “global truth”. Different groups and individuals see the world differently and have different needs from information systems. Language, and therefore what we call things, isn't simply reflective of reality. It plays a constructive and interpretive role.
The problem is that our technologies force us to make choices about world views and pretend that things aren't changing constantly. This yields fragile systems and high impedance to change that cascades through our organizations. This translates to expensive, rigid and difficult to extend failure to give the business what they want.
Our friend Tim Berners-Lee and his Happy W3C Merrymakers have given us a set of technologies to help us solve these problems though. We forget that the Web he designed was not the public Web, but one to solve integration needs for complex organizations like CERN. The HTML bit that we have gotten so excited about is but a small part of the vision. We will introduce RDF and SPARQL as enabling technologies. They do not necessarily replace what you already have, but they do make it possible to share information with people you've never talked to: Collaboration without Coordination.
Webs of documents are fabulous enough. Webs of data will blow your mind.
A table is a fixed structure. A tree is as well. A graph can go on forever and be extended at any time by anyone.
The Web is an unbounded graph. It is our definition of scale. What happens when we start to use it as the basis of sharing information, not just documents? This does not necessarily mean the public Web (although it is certainly appropriate for that as well). It simply means thinking of information as a web of linked entities through discoverable relationships.
Linked data is a way of doing this, but it is also an established project connecting billions of entities from disparate, unrelated sources. How does that even work? What can you do with such a thing? And what does that mean for your organization?
By building on the ideas introduced in the Data Integration talk, we will explore how webs of data built on standards can change everything.
“Computers are useless. They can only give you answers.” –Picasso
If everything is explicitly described in uniform, robust and comprehensive data models, it is easy to imagine how to ask questions of the data. The social, technical and financial costs of getting the information into that state, however, will keep that from ever being a reality. So, how can we integrate and reason over choppy and sloppy data from multiple sources in a variety of formats? We will see how the RDFS and Web Ontology Language (OWL) W3C standards help us connect and reason over content that leaves things unsaid without writing a bunch of custom code.
This is not artificial intelligence, but it is influenced by work done in that field.
The surge of interest in the REpresentational State Transfer (REST) architectural style, the Semantic Web, and Linked Data has resulted in the development of innovative, flexible, and powerful systems that embrace one or more of these compatible technologies. However, most developers, architects, Information Technology managers, and platform owners have only been exposed to the basics of resource-oriented architectures.
This talk, based upon Brian Sletten's book of the same name, is an attempt to catalog and elucidate several reusable solutions that have been seen in the wild, in the now increasingly familiar “patterns” style. These are not turnkey implementations, but rather useful strategies for solving certain problems in the development of modern, resource-oriented systems, both on the public Web and within an organization's firewalls.
So your server is having issues? Memory? Connections? Limited response? Is the first solution to bounce the server? Perhaps change some VM flags or add some logging? In today's Java 6 world, with its superior runtime monitoring and management capabilities, the reasons to bounce the server have been greatly reduced.
This session will look at the Java monitoring and management capabilities, which include the ability to make VM argument changes on the fly. This session will dive into the different memory compartments and how they are used by the JVM. Finally, this session will explore the different GC options and how they affect response times and throughput.
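As a taste of those capabilities, the standard java.lang.management API already exposes heap usage and garbage-collection counters at runtime. A minimal sketch (output will vary by JVM and GC settings):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class JvmStats {
    public static void main(String[] args) {
        // Heap usage as seen by the memory MXBean
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.printf("heap used=%d bytes, max=%d bytes%n", heap.getUsed(), heap.getMax());

        // Per-collector counts and accumulated collection time
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: collections=%d, time=%d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}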
Cloud architecture… an architectural walk-through of cloud services and components
High level
Data Centers / VDC
CDN
Monitoring
Load Balancing
Queue
Storage (S3, etc.)
DNS
Search
Routing
(Amazon + open source) example: CloudSearch vs. ElasticSearch
Security
Low level
HAProxy
Nerve
Synapse
Queues
Compute
DNS
Cloud architecture… an architectural walk-through of cloud services and components
High level
Data Centers / VDC
CDN
Monitoring
Load Balancing
Queue
Storage (S3, etc.)
DNS
Search
Routing
(Amazon + open source) example: CloudSearch vs. ElasticSearch
Security
Low level
HAProxy
Nerve
Synapse
Queues
Compute
DNS
Learn how to use the Spock testing framework in both Java and Groovy applications. This half-day workshop will demonstrate testing with both Spock and JUnit together for Java projects, Groovy projects, and projects that combine both technologies.
Topics will include writing specifications, using Spock mocks, and adding Spock enhancements.
Learn how to use the Spock testing framework in both Java and Groovy applications. This half-day workshop will demonstrate testing with both Spock and JUnit together for Java projects, Groovy projects, and projects that combine both technologies.
Topics will include writing specifications, using Spock mocks, and adding Spock enhancements.
The basics of developing for the Android platform will be explored, from setting up the SDK to using the Android Studio IDE and the generated Gradle build files. No previous experience is required, other than a basic knowledge of Java.
After discussing how Android fits into the marketplace, we'll look at creating applications, how to use activities, and working with layouts.
Building on the previous talk, we'll add intents, customized layouts for alternative configurations, talk about the activity lifecycle, use logging, and more.
We'll deploy to both emulators and connected devices, and change input styles.
This session will move beyond the basics to cover Android persistence mechanisms, accessing RESTful web services, and more. We'll look at shared preferences, basic file I/O, and the SQLite database. We'll also show how to operate off of the UI thread to access data from remote servers.
The application will access JSON data on a remote server, parse the data, and update the user interface based on the response.
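As a flavor of the simplest of those persistence options, shared preferences amount to a small private key-value store. A minimal sketch (the activity, file, and key names are illustrative):

import android.app.Activity;
import android.content.Context;
import android.content.SharedPreferences;
import android.os.Bundle;
import android.util.Log;

public class SettingsActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Write a value to the app's private key-value store
        SharedPreferences prefs = getSharedPreferences("settings", Context.MODE_PRIVATE);
        prefs.edit().putString("username", "nfjs").apply();

        // Read it back later, supplying a default if it is missing
        String username = prefs.getString("username", "anonymous");
        Log.d("SettingsActivity", "username=" + username);
    }
}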
The software industry changes rapidly, but you can protect yourself
from these changes by creating code that is complicated enough that
only you can maintain it.
Of course you should not engage in obvious bad practices. The good
news is that you don't have to. You can follow idiomatic industry
practice and stay buzzword compliant with the latest trends, while
quietly spreading complexity throughout systems. Better yet, the
symptoms will show up not in your own code, but in other code that
uses your code, directly or indirectly. You will be a hero as you
lead larger and larger teams burning the midnight oil to keep systems
alive.
Practice these principles, and your code will have an
infectious complexity that guarantees you will always be needed to
maintain it.
The key to understanding Clojure is ideas, not language constructs.
In this talk, we will approach Clojure via 10 Big Ideas.
Each of these ideas is valuable and useful a la carte, not necessarily
only in a Clojure context. Taken together, they begin to fill in the
picture of why Clojure is changing the way many programmers think
about software development.
Many software developers point their career towards ascending to the gilded rank of Architect…but what does it mean to actually be one? While many of us labor under the false pretense that the role is all about technical decision making, the reality is often very different. You'll code less, spending more time on activities that lack an objective green/red bar. But you'll also have an opportunity to impact far more than one project.
In this talk, I'll speak to my own journey. We'll touch on influencing coworkers, the importance of communication, and the importance of a cup of coffee.
Many software developers point their career towards ascending to the gilded rank of Architect…but what does it mean to actually be one? While many of us labor under the false pretense that the role is all about technical decision making, the reality is often very different. You'll code less, spending more time on activities that lack an objective green/red bar. But you'll also have an opportunity to impact far more than one project.
In this talk, I'll speak to my own journey. We'll touch on influencing coworkers, the importance of communication, and the importance of a cup of coffee.
Back in the day, web developers had to rely on their wits and a plethora of alert statements - to say our toolkit was spartan would be an understatement. But with the increased importance of web front ends and the rise of JavaScript MVC frameworks, a modern web developer toolkit is finally emerging. We've evolved from text editors to full-fledged IDEs with code completion and refactoring tools, but our toolchain doesn't end there. With multiple testing libraries, mocking frameworks, test drivers and even code coverage tools, today's web developer gets to walk downhill on a sunny day.
In this workshop, we'll discuss the various tools that you can assemble into your own full-fledged JavaScript development pipeline from code to deployment and, more importantly, get them up and running on your machine.
Back in the day, web developers had to rely on their wits and a plethora of alert statements - to say our toolkit was spartan would be an understatement. But with the increased importance of web front ends and the rise of JavaScript MVC frameworks, a modern web developer toolkit is finally emerging. We've evolved from text editors to full-fledged IDEs with code completion and refactoring tools, but our toolchain doesn't end there. With multiple testing libraries, mocking frameworks, test drivers and even code coverage tools, today's web developer gets to walk downhill on a sunny day.
In this workshop, we'll discuss the various tools that you can assemble into your own full-fledged JavaScript development pipeline from code to deployment and, more importantly, get them up and running on your machine.
For this workshop you should have:
If you are on a Windows machine, you may want to use Console2, PowerShell, Cygwin or Babun.
Before the workshop, take a minute to set up the code and the instructions somewhere on your laptop. The code is on GitHub:
https://github.com/ntschutta/bulletproof-js
The instructions are also on GitHub:
https://github.com/ntschutta/bulletproof-js-workshop
You can clone the repo from there or simply download a zip if you prefer. That's it! See you in Florida!
Back in the day, web developers had to rely on their wits and a plethora of alert statements - to say our toolkit was spartan would be an understatement. But with the increased importance of web front ends and the rise of JavaScript MVC frameworks, a modern web developer toolkit is finally emerging. We've evolved from text editors to full-fledged IDEs with code completion and refactoring tools, but our toolchain doesn't end there. With multiple testing libraries, mocking frameworks, test drivers and even code coverage tools, today's web developer gets to walk downhill on a sunny day.
In this workshop, we'll discuss the various tools that you can assemble into your own full-fledged JavaScript development pipeline from code to deployment and, more importantly, get them up and running on your machine.
Back in the day, web developers had to rely on their wits and a plethora of alert statements - to say our toolkit was spartan would be an understatement. But with the increased importance of web front ends and the rise of JavaScript MVC frameworks, a modern web developer toolkit is finally emerging. We've evolved from text editors to full-fledged IDEs with code completion and refactoring tools, but our toolchain doesn't end there. With multiple testing libraries, mocking frameworks, test drivers and even code coverage tools, today's web developer gets to walk downhill on a sunny day.
In this workshop, we'll discuss the various tools that you can assemble into your own full-fledged JavaScript development pipeline from code to deployment and, more importantly, get them up and running on your machine.
For this workshop you should have:
If you are on a Windows machine, you may want to use Console2, PowerShell, Cygwin or Babun.
Before the workshop, take a minute to set up the code and the instructions somewhere on your laptop. The code is on GitHub:
https://github.com/ntschutta/bulletproof-js
The instructions are also on GitHub:
https://github.com/ntschutta/bulletproof-js-workshop
You can clone the repo from there or simply download a zip if you prefer. That's it! See you in Florida!
Developers are flocking to client side frameworks and, as a result, there are more and more JavaScript libraries attempting to solve the rich internet application problem. In a space where new libraries seem to spring up weekly, what framework should you choose for your next project? While there is consensus around basic ideas like Model View Whatever, there are some strong philosophical differences amongst the various libraries. In this talk, we'll look at the similarities and differences of some emerging JavaScript libraries, discussing why you need to be aware of this rapidly evolving aspect of software development.
Developers are flocking to client side frameworks and, as a result, there are more and more JavaScript libraries attempting to solve the rich internet application problem. In a space where new libraries seem to spring up weekly, what framework should you choose for your next project? While there is consensus around basic ideas like Model View Whatever, there are some strong philosophical differences amongst the various libraries. In this talk, we'll look at the similarities and differences of some emerging JavaScript libraries, discussing why you need to be aware of this rapidly evolving aspect of software development.
Simulation allows a rigorous, scalable, and reproducible approach to testing. The separation of concerns, and the use of a versioned, time-aware database, give simulation great power. This talk will introduce simulation testing, walking through a complete example using Simulant, an open-source simulation library.
Simulation allows a rigorous, scalable, and reproducible approach to testing:
Simulation begins with statistical models of the use of your system. This model includes facts such as “we have identified four customer profiles, each with different browsing and purchasing patterns” or “the analytics query for the management report must run every Wednesday afternoon.” Models are versioned and kept in a database.
The statistical models are used to create activity streams. Each agent in the system represents a human user or external process interacting with the system, and has its own timestamped stream of interactions. With a large number of agents, simulations can produce the highly concurrent activity expected in a large production system.
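To make that concrete, here is a rough sketch of one agent's timestamped activity stream in Java; Simulant itself is a Clojure library, so the names and the toy model below are illustrative only.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative only: one agent's activity stream, generated up front
// from a simple statistical model of browsing vs. purchasing.
public class ActivityStream {

    static class Action {
        final long atMillis;   // when the agent performs the action
        final String type;     // what the action is

        Action(long atMillis, String type) {
            this.atMillis = atMillis;
            this.type = type;
        }
    }

    static List<Action> generateFor(long durationMillis, Random random) {
        List<Action> actions = new ArrayList<>();
        long t = 0;
        while (t < durationMillis) {
            t += 500 + random.nextInt(2000);               // think time between actions
            String type = random.nextDouble() < 0.8        // 80% browsing, 20% purchasing
                    ? "browse" : "purchase";
            actions.add(new Action(t, type));
        }
        return actions;
    }
}

Generating streams like this for thousands of agents, and replaying them against the system under test, is what produces realistic concurrent load.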
Agents are scaled across as many machines as are necessary to both handle the simulation load, and give access to the system under test. The simulator coordinates time, playing through the activity streams for all the agents.
Every step of the simulation process, including modeling, activity stream generation, execution, and the code itself, is captured and stored in a database for further analysis. You will typically also capture whatever logs and metrics your system produces.
Since all phases of a simulation are kept in a database, validation can be performed at any time. This differs markedly from many approaches to testing, which require in-the-moment validation against the live system.
The separation of concerns above, and the use of a versioned, time-aware database, gives simulation great power. Imagine that you get a bug report from the field, and you realize that the bug corresponds to a corner case that you failed to consider. With a simulation-based approach, you can write a new validation for the corner case, and run that validation against your past simulation results, without ever running your actual system.
This talk will introduce simulation testing, walking through a complete example using Simulant, an open-source simulation library.
Traditional automated testing approaches combine input generation, execution, output capture, and validation inside the bodies of single functions. Generative testing approaches gain expressive power by isolating these steps.
With generative testing:
There are a number of benefits to this approach:
This talk introduces test data generation and generative testing, using for its examples the data.generators and test.generative libraries developed by the author.
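The talk's examples use the Clojure libraries named above; purely to illustrate the separation of generation, execution, and validation, here is the same idea sketched in plain Java with made-up names.

import java.util.Random;

// Generative-testing sketch: inputs are generated, the code under test is run,
// and a property is validated, as three separate steps.
public class ReverseProperty {
    public static void main(String[] args) {
        Random random = new Random(42);   // fixed seed so any failure is reproducible
        for (int i = 0; i < 1000; i++) {
            String input = randomString(random);                    // 1. generate an input
            String output = new StringBuilder(input)
                    .reverse().reverse().toString();                // 2. exercise the code under test
            if (!output.equals(input)) {                            // 3. validate a property
                throw new AssertionError("Property failed for input: " + input);
            }
        }
        System.out.println("Property held for 1000 generated inputs");
    }

    static String randomString(Random random) {
        int length = random.nextInt(20);
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append((char) ('a' + random.nextInt(26)));
        }
        return sb.toString();
    }
}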
Groovy isn't designed to replace Java – it just makes Java cleaner and easier to develop. This presentation will look at various tasks Java developers need to do and demonstrate ways Groovy can help.
Topics will include building and testing applications, accessing both relational and NoSQL databases, working with web services, and more.
Groovy has a very easy learning curve for Java developers, so many people become Groovy users without realizing all it can do. This presentation will examine features of Groovy that can make your life easier once you're past the initial adoption stage.
Examples will include closure coercion, mixins, simple runtime metaprogramming, operator overloading, drop and take, a tour through some of the overlooked methods in the Groovy JDK, and more.
Statistics is hot lately, due in part to the easy availability of large data sets and the successes of people like Nate Silver. These aren't your father's statistics, however. A quiet revolution has swept through the field, shifting it from traditional frequentist methods toward a more Bayesian approach. This talk will discuss Bayes' Theorem and show you how to do simple, back-of-the-envelope calculations to apply it to a wide variety of problems.
In addition, we'll also talk about common errors non-experts make when dealing with statistical conclusions, ranging from small sample size issues to the use of arbitrary endpoints to the problem of overfitting and more.
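As an illustration of the kind of back-of-the-envelope calculation described above (the numbers here are invented for the example), Bayes' Theorem is

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$

so for a condition with 1% prevalence and a test with 90% sensitivity and 90% specificity,

$$P(\text{condition} \mid \text{positive}) = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.1 \times 0.99} \approx 0.08$$

that is, a positive result still leaves only about an 8% chance that the condition is present, which is exactly the sort of counterintuitive result these calculations routinely reveal.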
Introduction to Go
Introduction to Go… language of the cloud
As a web application developer, most of your focus is on user stories and producing business value for your company or clients. Increasingly, however, the world wide web is more like the wild wild web: an ever more hostile environment for web applications. It is absolutely necessary for web application teams to have security knowledge, a security model, and to leverage proper security tools.
This training workshop on security will provide an overview of the security landscape, starting with the OWASP top ten security concerns and current real-world examples of each of these attack vectors. The first session will consist of a demonstration and labs using hacker tools to get an understanding of how a hacker thinks. It will include a walk-through of the ESAPI toolkit as an example of how to solve a number of these security concerns, including hands-on labs using the OWASP example swingset.
The workshop will include several hands-on labs from the WebGoat project in order to better understand the threats that are ever so common today.
Attendees will come away with the following skills / capabilities:
Don't be the weakest link on the web!
As a web application developer, most of your focus is on user stories and producing business value for your company or clients. Increasingly, however, the world wide web is more like the wild wild web: an ever more hostile environment for web applications. It is absolutely necessary for web application teams to have security knowledge, a security model, and to leverage proper security tools.
This training workshop on security will provide an overview of the security landscape, starting with the OWASP top ten security concerns and current real-world examples of each of these attack vectors. The first session will consist of a demonstration and labs using hacker tools to get an understanding of how a hacker thinks. It will include a walk-through of the ESAPI toolkit as an example of how to solve a number of these security concerns, including hands-on labs using the OWASP example swingset.
The workshop will include several hands-on labs from the WebGoat project in order to better understand the threats that are ever so common today.
Attendees will come away with the following skills / capabilities:
Don't be the weakest link on the web!
Gradle has fast become one of the de facto build tools in the Java ecosystem. Gradle offers a powerful DSL to configure your builds. Whether you have a simple build or a complex build with many moving parts, Gradle's DSL and extensible API can help you make your builds easier, and in some cases possible at all.
In this session we will start from the ground up. We will write our first Gradle script and examine Gradle's configuration vs. execution phases. We will see how Gradle uses plugins to add functionality to your build for free, and end with a look at the Gradle plugin API to see how easy it is to write your own plugins for better reuse.
For decades object-oriented programming has been sold (perhaps oversold) as the logical programming paradigm that provides “the way” to software reuse and reductions in the cost of software maintenance, as if these benefits come for free with the simple selection of an OO language. Even with the renewed interest in functional languages, the majority of development shops are predominately using object-oriented languages such as Java, C#, and Ruby. So most likely you are using an OO language… How is that reuse thing going? Is your organization realizing all the promises? Even as a former Rational Instructor of OOAD and a long-time practitioner, I find great value in returning to the basics. This session is a return to object-oriented basics.
This session is intended to balance the often-touted theoretical object-oriented practices with lessons from the real world. The session will start with a review of some of the basics regarding abstraction and encapsulation. Although these are simple concepts, we will push the boundary of how they are applied. We will discuss the difference between analysis and design and how that is reflected in our code. We will also look at the limitations of Java the language as outlined in Josh Bloch's book “Effective Java”. The session will go past the basics of object-oriented principles and into what our true goals of development really are.
There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How's the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?“
David Foster Wallace, from This is Water
A fish is swimming along, having some breakfast, when suddenly he's snatched out of his world by his food, abruptly ejected into a bright world where it's hard to breathe, landing on the bottom of a boat, where strange alien creatures make strange sounds. Overhead, he sees an airplane flying at 500 mph. Suddenly, one of the creatures picks him up, removes the hook, and, just as suddenly, he's back home. Yet, when he regales his friends with this tale, no one can believe such a strange world could exist.
This keynote describes the water you swim in but cannot see anymore, like relational databases and application servers. And, it jerks you out of this water (briefly) to describe a strange, fantastical world with things like immutable database servers and phoenix machines. You may have trouble getting your friends who didn't attend to understand.
Jamie Zawinski once said, “Some people, when confronted with a problem, think ‘I know, I'll use regular expressions.’ Now they have two problems.” Many consider regular expressions to be indecipherable, but the truth is that every programmer should consider regular expressions an integral part of their toolkit. From the command line to your favorite text editor, from parsing user input to scraping HTML pages - once you know regular expressions you will find a use for them in almost every programming context.
In this session we will attempt to unriddle the mystery that regular expressions pose. We will start at the basics and work our way towards more complex expressions.
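As a tiny example of the kind of expression we will build up to, here is a regular expression used from Java to pull dates out of free text; the pattern is deliberately simple and illustrative, not a complete date validator.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexDemo {
    public static void main(String[] args) {
        // Matches dates written as yyyy-mm-dd and captures the three parts
        Pattern datePattern = Pattern.compile("(\\d{4})-(\\d{2})-(\\d{2})");
        Matcher matcher = datePattern.matcher("Release planned for 2014-06-17, slipped to 2014-07-01.");
        while (matcher.find()) {
            System.out.printf("year=%s month=%s day=%s%n",
                    matcher.group(1), matcher.group(2), matcher.group(3));
        }
    }
}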
Becoming more agile
Becoming more agile
winning with agile by analyzing the failings of others
winning with agile by analyzing the failings of others
In this session we will look at how we can refactor our learning - what tools and methodologies we can use to help us learn quicker and better, and how we can create a store that gives us quick access to information when we really need it.
We all work in an industry in which not only do the tools that we use change every few years, but one in which we have to shift the very paradigms these tools are built on! Even the most trivial of projects entails tens of different toolkits, frameworks, and languages coming together, and somehow we need to know how to leverage each one. How does one keep up? Despite all our years in school and our inborn nature to learn, we are often never taught how to learn. How can we learn faster, and retain even more?
In this session we will take a look at various tools and techniques available to us and see how we can make our learning effective.