The first part of the Continuous Delivery workshop covers the differences between continuous integration, continuous deployment, and continuous delivery. It also introduces the deployment pipeline, along with its usage, patterns, and anti-patterns. This part concludes with some applied engineering principles.
Releasing software to actual users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high-quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers, and operations, delivery teams can get changes released in a matter of hours—sometimes even minutes—no matter what the size of a project or the complexity of its code base. The workshop materials are derived from the best-selling book Continuous Delivery and were created in collaboration with its authors and other ThoughtWorks colleagues of mine. Continuous Delivery details how to get fast feedback on the production readiness of your application every time there is a change—to code, infrastructure, or configuration.
The first part of the workshop describes the technical differences between related topics such as continuous integration, continuous deployment, and continuous delivery. At the heart of the workshop is a pattern called the deployment pipeline, which involves the creation of a living system that models your organization's value stream for delivering software. I discuss the various stages, how triggering works, patterns and anti-patterns, and how to pragmatically determine what “production ready” means. This session also covers some agile principles espoused by the Continuous Delivery book, including new perspectives on things like developer workstations and configuration management.
Continuous Delivery relies on a variety of interlocking engineering practices to work efficiently; this session covers three related topics. First, I cover the role of testing and the testing quadrant. Second, I specifically cover version control usage and offer alternatives to feature branching such as feature toggles and branch by abstraction. Third, I describe some incremental release strategies, along with their impact on other stages of the project lifecycle.
Releasing software to actual users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high-quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers, and operations, delivery teams can get changes released in a matter of hours—sometimes even minutes—no matter what the size of a project or the complexity of its code base. The workshop materials are derived from the best-selling book Continuous Delivery and were created in collaboration with its authors and other ThoughtWorks colleagues of mine. Continuous Delivery details how to get fast feedback on the production readiness of your application every time there is a change—to code, infrastructure, or configuration.
Continuous Delivery relies on a variety of interlocking engineering practices to work efficiently; this session covers three related topics. First, I cover the role of testing and the testing quadrant, including the audience and engineering practices around different types of tests. I also cover some best practices around testing, including testing ratios, code coverage, and other topics. Second, I specifically cover version control usage and offer alternatives to feature branching such as feature toggles and branch by abstraction. Generally, I talk about building synergistic engineering practices that complement rather than conflict with one another. In particular, I discuss why feature branching harms three other engineering practices and describe alternatives. Third, I describe some incremental release strategies, along with their impact on other stages of the project lifecycle.
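To make the version-control discussion concrete, here is a minimal sketch of a feature toggle in Java, one of the alternatives to feature branching mentioned above; the class and property names are illustrative, not taken from the workshop materials.

```java
import java.util.Properties;

// Minimal feature-toggle sketch: new code ships on trunk but stays dark
// until the toggle is flipped in configuration. Names are illustrative.
public class FeatureToggles {
    private final Properties config;

    public FeatureToggles(Properties config) {
        this.config = config;
    }

    public boolean isEnabled(String feature) {
        return Boolean.parseBoolean(config.getProperty("feature." + feature, "false"));
    }
}

class CheckoutService {
    private final FeatureToggles toggles;

    CheckoutService(FeatureToggles toggles) {
        this.toggles = toggles;
    }

    String checkout() {
        // Both code paths live on trunk; the toggle decides which one runs.
        return toggles.isEnabled("new-checkout") ? newCheckoutFlow() : legacyCheckoutFlow();
    }

    private String newCheckoutFlow() { return "new"; }
    private String legacyCheckoutFlow() { return "legacy"; }
}
```

The point of the pattern is that both code paths integrate continuously on trunk; the toggle, not a branch, decides what users see.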
Two big stumbling blocks for Continuous Delivery adoption are interactions with operations and the keepers of data. First in this session, I cover operations, DevOps, and programmatic control of infrastructure. Second, I discuss how to incorporate databases and DBAs into the Continuous Integration and Continuous Delivery process.
Releasing software to actual users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high-quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers, and operations, delivery teams can get changes released in a matter of hours—sometimes even minutes—no matter what the size of a project or the complexity of its code base. The workshop materials are derived from the best-selling book Continuous Delivery and were created in collaboration with its authors and other ThoughtWorks colleagues of mine. Continuous Delivery details how to get fast feedback on the production readiness of your application every time there is a change—to code, infrastructure, or configuration.
Two big stumbling blocks for Continuous Delivery adoption are interactions with operations and the keepers of data. First in this session, I cover operations, DevOps, and programmatic control of infrastructure using tools like Puppet and Chef. I also discuss the explosion of tool alternatives in this space and cover some current-day best practices. Second, I discuss how to incorporate databases and DBAs into the Continuous Integration and Continuous Delivery process. This includes database migrations, strategies for enhancing collaboration between application development and data teams, and database refactoring techniques.
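As a concrete illustration of the database-migration idea, here is a small sketch that runs versioned migrations from application code, assuming the Flyway library (flywaydb.org) is on the classpath; the connection details and migration locations are placeholders, and the exact fluent API varies by Flyway version.

```java
// A sketch of running versioned database migrations as part of the deployment
// pipeline, assuming Flyway is available. Credentials and paths are placeholders.
import org.flywaydb.core.Flyway;

public class MigrateDatabase {
    public static void main(String[] args) {
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:postgresql://localhost/app", "app", "secret")
                .locations("classpath:db/migration") // e.g. V1__create_orders.sql, V2__add_index.sql
                .load();
        flyway.migrate(); // applies any migrations not yet recorded in the schema history table
    }
}
```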
We make many assumptions when we develop our applications. Many of these assumptions no longer hold true when we start to build applications for the cloud. Cloud platforms also introduce architectural possibilities that do not exist in traditional deployment settings. This session will examine five architectural patterns that we can apply to our applications in order to prepare them for the unique characteristics of cloud environments.
We'll cover the following patterns:
Examples will focus on the application of these patterns using Java/Spring and Cloud Foundry-based PaaS platforms, but should be applicable to any language/framework/PaaS platform combination.
Even with the recent explosion in alternative languages for the JVM, the vast majority of us are still writing code in “Java the language” in order to put bread on the table. Proper craftsmanship demands that we write the best Java code that we can possibly write. Fortunately we have a guide in Joshua Bloch's Effective Java.
In his foreword to the first edition, Guy Steele writes about the importance of learning three aspects of any language: grammar, vocabulary, and idioms. Unfortunately, many programmers stop learning after mastering the first two. Effective Java is your guide to understanding idiomatic Java programming.
Effective Java is organized into 78 standalone “items,” far more than can be covered in one session. Instead, I've chosen a subset of the most important techniques and practices that are commonly missed by today's Java programmers. You'll pick from a menu and decide where we'll head. Regardless of the path we take, you'll leave this session thoroughly equipped to write better Java code tomorrow!
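As a taste of the kind of idiom the session covers, here is one commonly cited Effective Java-style technique, chosen here purely as an illustration (it may or may not be on the menu): a static factory method in place of a public constructor.

```java
// An illustrative idiom: static factory methods give the class descriptive
// entry points and full control over instance creation.
public final class Temperature {
    private final double celsius;

    private Temperature(double celsius) {
        this.celsius = celsius;
    }

    public static Temperature fromCelsius(double celsius) {
        return new Temperature(celsius);
    }

    public static Temperature fromFahrenheit(double fahrenheit) {
        return new Temperature((fahrenheit - 32) * 5.0 / 9.0);
    }

    public double celsius() {
        return celsius;
    }
}
```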
“Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere.” Docker creates containers that provide a running process with:
It does this by leveraging low-level Linux kernel primitives like cgroups and namespaces. The end result is a portable application container that can run anywhere Docker can run, including on VMs, bare-metal servers, OpenStack clusters, public cloud instances, or combinations of the above.
Containers are an excellent way to package your application such that it can run consistently everywhere you want to run it, a fantastic step toward Continuous Delivery. In this session we'll look at how to use Docker to package, deploy, and run Java applications and other services. We'll also compare Docker to another container solution, Warden, which is a key component of the Cloud Foundry PaaS.
In this session, we will take a look at Angular - a new MVC framework by Google. We will discuss some of the terminology that Angular offers and see how we can use it to develop highly interactive, dynamic web applications. See “Detail” for a list of topics I cover and the GitHub repo URL.
This is an intro-level talk in which we will take a look at Angular and developing rich web applications. Angular embraces HTML and CSS, allowing you to extend HTML towards your application, and uses plain JavaScript, which makes your code easy to reuse and test.
Note: This is an intro level talk. It is targeted towards developers who are curious about Angular and want to learn about the fundamental features and concepts in Angular.
Topics Covered -
ng-app and ng-init, and expression evaluation with {{ }}
directives and $rootScope
ng-model and $scope
ng-repeat
ng-form, form validation, and form submission in AngularJS
$http
GitHub URL - https://github.com/looselytyped/angudone-backend/tree/solutions
JavaScript will celebrate its 19th birthday in 2014. For a language that has been around for such a long time, it has seen very few, if any, changes to the language itself. Well, all that is about to change with ECMAScript.next (or ECMAScript 6). ECMAScript 6 modernizes JavaScript syntax, while bringing in features such as modules for better namespacing, class as a first-class construct, and a variety of additional operators, thus ensuring that JavaScript is ready for the next era of large-scale modern web applications.
In this session we will take a look at some of the features that ECMAScript 6 brings to the table and see what kind of browser support is available for it.
In this session we will look at some JavaScript patterns, and how you can use them within your code.
No longer can you think of JavaScript as a kiddie scripting language. JavaScript has now been promoted to a first-class citizen within your application - and with this comes the need to think of better abstractions and code reuse, and even to establish a common vocabulary for discussing approaches to writing better, cleaner, and ultimately more maintainable JavaScript code.
This presentation covers the Guava library developed by Google (http://code.google.com/p/guava-libraries/). Guava provides collection extensions to the Java Collection API and, along with this, a cornucopia of time-saving utilities that bring Java as close as possible to some of the more functional and dynamic language competitors like Scala, Ruby, and Clojure.
This presentation focuses on the following topics: how to make Predicates and Functions; how to use new collection constructs that make life easier, including Multimaps, BiMaps, and Multisets; how to set up and use Guava preconditions; how to create truly immutable collections; and more. All of this is done with Java.
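A brief, hedged sketch of a few of those constructs follows; the package names are Guava's standard com.google.common ones, but the example itself is mine rather than the session's.

```java
// A small taste of the Guava constructs discussed above.
import com.google.common.base.Preconditions;
import com.google.common.base.Predicate;
import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.Iterables;
import com.google.common.collect.Multimap;

public class GuavaTour {
    public static void main(String[] args) {
        // Multimap: one key, many values, without hand-rolled Map<K, List<V>> plumbing
        Multimap<String, String> speakersByTopic = ArrayListMultimap.create();
        speakersByTopic.put("java", "Josh");
        speakersByTopic.put("java", "Brian");

        // Truly immutable collection
        ImmutableList<String> tracks = ImmutableList.of("languages", "architecture", "agility");

        // Preconditions: fail fast with a useful message
        Preconditions.checkArgument(!tracks.isEmpty(), "at least one track is required");

        // Predicate + Iterables.filter: pre-Java 8 style functional filtering
        Iterable<String> aTracks = Iterables.filter(tracks, new Predicate<String>() {
            @Override
            public boolean apply(String input) {
                return input.startsWith("a");
            }
        });

        System.out.println(speakersByTopic + " " + aTracks);
    }
}
```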
Finally, the Java 8 Date/Time API will be within our grasp, and we will celebrate gleefully in the streets! ISO 8601 and UTC standards! Immutability! Time to set attention to stun, as this presentation will cover all the goods about the new Date/Time API in Java 8 that will finally make our code safer than before.
We cover in 90 minutes:
ISO 8601 & UTC
Instants
Periods
Durations
Parsing
Time Zone Manipulations and Updates
Conversion from Old Java Date Times
Whatever else we can cram into the presentation
This will be interactive for the most part, with few slides. Bring on your questions.
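For a flavor of the topics above, here is a small, hedged sketch using the java.time API; the values and zones are illustrative only.

```java
// A quick tour of the java.time topics listed above.
import java.time.Duration;
import java.time.Instant;
import java.time.LocalDate;
import java.time.Period;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.util.Date;

public class DateTimeTour {
    public static void main(String[] args) {
        Instant now = Instant.now();                       // an instant on the UTC timeline
        Duration tenMinutes = Duration.ofMinutes(10);      // machine time
        Period twoWeeks = Period.ofWeeks(2);               // human calendar time

        LocalDate parsed = LocalDate.parse("2014-03-15");  // ISO 8601 parsing by default

        // Time zone manipulation: same instant, different wall-clock view
        ZonedDateTime chicago = now.atZone(ZoneId.of("America/Chicago"));
        ZonedDateTime london = chicago.withZoneSameInstant(ZoneId.of("Europe/London"));

        // Conversion from the old java.util.Date
        Date legacy = new Date();
        Instant fromLegacy = legacy.toInstant();

        System.out.println(now.plus(tenMinutes) + " " + parsed.plus(twoWeeks)
                + " " + london + " " + fromLegacy);
    }
}
```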
Scala is known for both its clarity in some cases and its obscurity in others. Well, this presentation sticks with the obscurity. We will cover abstract types, the Predef, implicit conversions, creating infix types, singleton types, type variance, type bounds, partially applied functions vs. partial functions, type projections, and overcoming type erasure using Manifests.
This presentation is geared at the programmer level and not at a scientific level, with no type theory or calculus. Basic knowledge of Scala is preferable, though not required. Asking tough questions is mandatory.
The software industry changes rapidly, but you can protect yourself
from these changes by creating code that is complicated enough that
only you can maintain it.
Of course you should not engage in obvious bad practices. The good
news is that you don't have to. You can follow idiomatic industry
practice and stay buzzword compliant with the latest trends, while
quietly spreading complexity throughout systems. Better yet, the
symptoms will show up not in your own code, but in other code that
uses your code, directly or indirectly. You will be a hero as you
lead larger and larger teams burning the midnight oil to keep systems
alive.
Practice these principles, and your code will have an
infectious complexity that guarantees you will always be needed to
maintain it.
This multi-disciplinary session takes a deep dive into
the confluence of topics required to fully understand the intersection
of Continuous Delivery and architecture, including evolutionary
architecture and emergent design, with an emphasis on how
architectural decisions affect the ease in changing and evolving your
code, the role of metrics to understand code, how Domain Driven
Design's Bounded Context reifies in architecture, how to reduce
intra-component/service coupling, and other techniques.
Continuous Delivery is a process for automating the production
readiness of your application every time a change occurs – to code,
infrastructure, or configuration. In the Continuous Delivery world,
rather than hone skills at predicting the future via Big Design Up
Front, Continuous Delivery emphasizes techniques for understanding
and changing code with less cost during the process. Some architectures
and engineering practices yield better designs for this
environment. This multi-disciplinary session takes a deep dive into
the confluence of topics required to fully understand the intersection
of Continuous Delivery and architecture, including evolutionary
architecture and emergent design, with an emphasis on how
architectural decisions affect the ease in changing and evolving your
code, the role of metrics to understand code, how Domain Driven
Design's Bounded Context reifies in architecture, how to reduce
intra-component/service coupling, and other techniques.
Almost every example of an agile project involves a single team, and while many successful projects are delivered that way, most enterprise software requires the interaction of several teams. But how do we scale agile beyond a single team? What practices translate and which ones don't? In this talk we'll discuss some of the issues you'll encounter as you move agile beyond a single group and how you can keep multiple stakeholders happy. While it isn't as simple as having a “scrum of scrums,” it isn't as hard as replacing every line of COBOL.
Technology changes; it's a fact of life. And while many developers are attracted to the challenge of change, many organizations do a particularly poor job of adapting. We've all worked on projects with, ahem, less-than-new technologies even though newer approaches would better serve the business. But how do we convince those holding the purse strings to pony up the cash when things are “working” today? At a personal level, how do we keep up with the change in our industry?
This talk will explore ways to stay sharp as a software professional. We'll talk about how a technology radar can help you stay marketable (and keep enjoying your career) and how we can use the same technique to help our companies keep abreast of important changes in the technology landscape. Of course, it isn't enough to just be aware; we have to drive change - but how? This talk will consider ways we can influence others and lead change in our organizations.
This is a revised and updated version of the previous talk, with current thinking from practice and the literature. The talk presents why conflicts with your manager are inevitable based on differences in priorities and perspectives, and how to plan for them. The goal is to show you how to build the loyalty relationship that allows you to get what you need when you need it.
Topics covered will include diagnosing communication styles, lessons from game theory, working within the organizational hierarchy, and lessons on how to build a relationship with your manager that still allows you the freedom to express yourself and what you really want.
Groovy isn't designed to replace Java – it just makes Java cleaner and easier to develop. This presentation will look at various tasks Java developers need to do and demonstrate ways Groovy can help.
Topics will include building and testing applications, accessing both relational and NoSQL databases, working with web services, and more.
The Spock framework brings simple, elegant testing to Java and Groovy projects. It integrates cleanly with JUnit, so Spock tests can be integrated as part of an existing test suite. Spock also includes an embedded mocking framework that can be used right away.
In this presentation, we'll look at several examples of Spock tests and review most of its capabilities, including mock objects and integration with Spring.
Gradle has fast become one of the de facto build tools in the Java ecosystem. Gradle offers a powerful DSL to configure your builds. Whether you have a simple build or a complex build with many moving parts, Gradle's DSL and extensible API can help make your builds easier - and, in some cases, possible at all.
In this session we will start from the ground up. We will write our first Gradle script and examine Gradle's configuration vs. execution phases. We will see how Gradle uses plugins to add functionality to your build for free, and we will end with a look at the Gradle plugin API to see how easy it is to write your own plugins for better reuse.
Gradle has fast become one of the de facto build tools in the Java ecosystem. You might have used Gradle to build your project without realizing how much more Gradle has to offer. Up-to-date checking, automatic clean, file operations such as copy, move, and zip, and so much more are just some of the features that Gradle offers right out of the box.
In this session we will explore some of these features and how we can use them within our own builds.
Jamie Zawinski once said, “Some people, when confronted with a problem, think 'I know, I'll use regular expressions.' Now they have two problems.” Many consider regular expressions to be indecipherable, but the truth is that every programmer should consider regular expressions an integral part of their toolkit. From the command line to your favorite text editor, from parsing user input to scraping HTML pages - once you know regular expressions, you will find a use for them in almost every programming context.
In this session we will attempt to unriddle the mystery that regular expressions pose. We will start at the basics and work our way towards more complex expressions.
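As a preview of where we will end up, here is a small Java example (illustrative, not from the session) that uses named groups and quantifiers to pull structure out of a line of text.

```java
// Named groups make the intent of the expression readable and the extraction explicit.
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexDemo {
    public static void main(String[] args) {
        Pattern logLine = Pattern.compile(
                "(?<level>INFO|WARN|ERROR)\\s+(?<time>\\d{2}:\\d{2}:\\d{2})\\s+(?<msg>.*)");

        Matcher m = logLine.matcher("ERROR 12:34:56 connection refused");
        if (m.matches()) {
            System.out.println(m.group("level") + " at " + m.group("time") + ": " + m.group("msg"));
        }
    }
}
```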
Architectural Briefings are interactive presentations by and for architects on specific technology topics. The purpose is to empower architects to make decisions about the choice and application of technology. Note that you don't have to have “Architect” in your title to participate; this is for anybody who makes architectural decisions.
Developers make decisions all the time, and can never have enough information and support. At Cognitect, we run a weekly Architectural Briefing as a resource for the team. Everybody participates, both attending and speaking. In this talk, we will cover
We will finish with an example briefing, showing how these ideas come together.
The basics of developing for the Android platform will be explored, from setting up the SDK to using the Android Studio IDE and the generated Gradle build files. No previous experience is required, other than a basic knowledge of Java.
After discussing how Android fits into the marketplace, we'll look at creating applications, how to use activities, and working with layouts.
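For orientation, a minimal Activity along the lines of what we build looks roughly like this; the programmatic layout keeps the sketch self-contained, and the names are placeholders rather than the workshop's actual code.

```java
// A minimal Android Activity sketch; real projects would use an XML layout
// via setContentView(R.layout.activity_main).
import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class HelloActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextView view = new TextView(this);   // programmatic view keeps the example self-contained
        view.setText("Hello, Android");
        setContentView(view);
    }
}
```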
Building on the previous talk, we'll add intents and customized layouts for alternative configurations, talk about the activity lifecycle, use logging, and more.
We'll deploy to both emulators and connected devices, and change input styles.
This session will move beyond the basics to cover Android persistence mechanisms, accessing RESTful web services, and more. We'll look at shared preferences, basic file I/O, and the SQLite database. We'll also show how to operate off of the UI thread to access data from remote servers.
The application will access JSON data on a remote server, parse the data, and update the user interface based on the response.
HotSpot promises to do wonders for us by Just-in-Time (JIT) compiling the “right” code, but how does it make those decisions? And, perhaps more importantly, what happens when it's wrong?
In this talk, you'll learn through real code examples just how the JVM decides to compile your code, deoptimize your code, and stop-the-world for a GC.
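A tiny experiment of the sort used in the talk is sketched below; the class is illustrative, and the -XX:+PrintCompilation flag (plus, optionally, -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining) lets you watch HotSpot decide when the method is worth compiling.

```java
// Run with: java -XX:+PrintCompilation HotLoop
// and watch sum() cross the invocation threshold and get JIT compiled.
public class HotLoop {
    static long sum(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        long blackhole = 0;
        for (int i = 0; i < 20_000; i++) {   // enough iterations to trigger compilation
            blackhole += sum(10_000);
        }
        System.out.println(blackhole);       // print the result so the loop isn't dead code
    }
}
```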
Early releases of Java performed poorly, but those issues largely disappeared long ago with the introduction of HotSpot. However, much of the performance advice for Java persists through hearsay from those early days.
In this talk, we'll forget the hearsay and take an objective look using benchmarking and profiling tools to find out which optimizations matter today and just as importantly those that don't.
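Here is a minimal JMH-style benchmark of the kind we'll use to replace hearsay with measurement, assuming the JMH annotations, runner, and its annotation processing are set up in the build; the benchmark itself is an illustrative example, not one of the talk's.

```java
// Measures two string-concatenation idioms instead of trusting folklore.
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class ConcatBenchmark {
    @Benchmark
    public String plusConcat() {
        String s = "";
        for (int i = 0; i < 10; i++) {
            s += i;                       // the "slow" idiom the folklore warns about
        }
        return s;
    }

    @Benchmark
    public String builderConcat() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10; i++) {
            sb.append(i);                 // the commonly recommended alternative
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        Options opt = new OptionsBuilder()
                .include(ConcatBenchmark.class.getSimpleName())
                .build();
        new Runner(opt).run();
    }
}
```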
Over the years, Java developers have learned through trial and error the best ways to use Java's type system. But certain parts of the type system, like wildcard generics and covariant return types, are underused, in part because they are not well understood.
Fortunately by going back to the mathematical roots of type systems, we can understand not only wildcard types and covariance, but also why we should prefer composition over inheritance and even how compilers perform some of their optimization magic.
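A small, hedged illustration of how those roots surface in everyday code: the “producer extends, consumer super” wildcard rule, which is covariance and contravariance made explicit.

```java
// Wildcards in action: the source only produces Ts, the destination only consumes them.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class Wildcards {
    // "Producer extends, consumer super"
    static <T> void copy(List<? extends T> src, Collection<? super T> dst) {
        for (T item : src) {
            dst.add(item);
        }
    }

    public static void main(String[] args) {
        List<Integer> ints = Arrays.asList(1, 2, 3);
        List<Number> numbers = new ArrayList<>();
        copy(ints, numbers);            // a List<Integer> is usable wherever a producer of Number is needed
        System.out.println(numbers);
    }
}
```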
Just like you need to understand execution plans to write efficient SQL, it helps to understand the JVM's execution engine to write efficient Java code.
In this talk, you'll learn how Java's Just-in-Time (JIT) compiler dissects your code; by doing so, you'll not only learn how to write efficient code more efficiently, but also gain a better understanding of concurrency challenges.
Unlike earlier languages, Java had a well-defined threading and memory model from its inception, and over the years additional packages have been added to the standard library to help solve concurrency problems.
Despite all this, Java's memory model remains poorly understood and its concurrency libraries underutilized.
In this talk, we'll discuss concurrency in detail, starting from the low level of the hardware up to the high level of concurrent collections and Java 7's new parallel fork/join library.
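As a concrete taste of the high end of that range, here is a compact fork/join sketch (illustrative, not from the talk): a summation that is recursively split so the pool's work-stealing threads can run the pieces in parallel.

```java
// Recursively split a summation and let ForkJoinPool's work-stealing threads run the pieces.
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ParallelSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000;
    private final long[] data;
    private final int from, to;

    ParallelSum(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) {
            long sum = 0;
            for (int i = from; i < to; i++) {
                sum += data[i];
            }
            return sum;
        }
        int mid = (from + to) / 2;
        ParallelSum left = new ParallelSum(data, from, mid);
        ParallelSum right = new ParallelSum(data, mid, to);
        left.fork();                              // run the left half asynchronously
        return right.compute() + left.join();     // compute the right half here, then combine
    }

    public static void main(String[] args) {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) {
            data[i] = i;
        }
        System.out.println(new ForkJoinPool().invoke(new ParallelSum(data, 0, data.length)));
    }
}
```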
Most of us don't want to go back to the days of malloc and free, but the garbage collector isn't always our friend.
In this presentation, you'll learn about the different garbage collection strategies used in JVMs, how to monitor garbage collection, analyze memory dumps, and why you might want to use one collection strategy instead of another.
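For a sense of what “monitoring garbage collection” can look like programmatically, here is a small sketch using the standard MXBeans; which collectors appear depends on flags such as -XX:+UseG1GC, and the allocation loop exists only to generate garbage.

```java
// Reads collector statistics via the standard java.lang.management MXBeans.
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // Allocate some short-lived objects so the numbers are non-zero.
        for (int i = 0; i < 1_000_000; i++) {
            String garbage = new String("garbage-" + i);
        }
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```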
You understand the Web. Why do you do REST so wrong?
Doing REST “right” isn't a matter of conformance or purity. It isn't about pleasing hard-to-please Restafarian personalities. It's about understanding that WYBIWYG (What You Build Is What You Get). Decisions have consequences. You are free to design and implement whatever you like, but you need to understand the consequences of your choices.
REST, as defined by Roy Fielding's thesis, is a collection of architectural constraints designed to yield certain properties in deployed systems. When you take shortcuts, you simply will not receive all the benefits of loosely coupled, evolvable, flexible, scalable systems. URLs are not enough. HTTP is not enough. To fully embrace the world of REST, you must understand Hypermedia. The good news is that you already do; you've apparently just forgotten.
Here's the thing though. REST is not an endpoint, it is just a beginning. Come to this talk to hear how the story starts. We will take a deep dive into why the Web works, the implications for building Hypermedia-driven REST APIs and start to look at what this means in practice.
The cost of integrating information isn't cheap. Well, at least it isn't if you do it wrong. Chances are, you're doing it wrong.
The single most difficult aspect of data integration is the effort to achieve consensus. It isn't just that we are disagreeable people. It's also that it is a fantasy that there is a “common model” or a “global truth”. Different groups and individuals see the world differently and have different needs from information systems. Language, and therefore what we call things, isn't simply reflective of reality. It plays a constructive and interpretive role.
The problem is that our technologies force us to make choices about world views and pretend that things aren't changing constantly. This yields fragile systems and high impedance to change that cascades through our organizations. It translates into expensive, rigid systems that are difficult to extend and that fail to give the business what it wants.
Our friend Tim Berners-Lee and his Happy W3C Merrymakers have given us a set of technologies to help us solve these problems though. We forget that the Web he designed was not the public Web, but one to solve integration needs for complex organizations like CERN. The HTML bit that we have gotten so excited about is but a small part of the vision. We will introduce RDF and SPARQL as enabling technologies. They do not necessarily replace what you already have, but they do make it possible to share information with people you've never talked to: Collaboration without Coordination.
Webs of documents are fabulous enough. Webs of data will blow your mind.
A table is a fixed structure. A tree is as well. A graph can go on forever and be extended at any time by anyone.
The Web is an unbounded graph. It is our definition of scale. What happens when we start to use it as the basis of sharing information, not just documents? This does not necessarily mean the public Web (although it is certainly appropriate for that as well). It simply means thinking of information as a web of linked entities through discoverable relationships.
Linked data is a way of doing this, but it is also an established project connecting billions of entities from disparate, unrelated sources. How does that even work? What can you do with such a thing? And what does that mean for your organization?
By building on the ideas introduced in the Data Integration talk, we will explore how webs of data built on standards can change everything.
The surge of interest in the REpresentational State Transfer (REST) architectural style, the Semantic Web, and Linked Data has resulted in the development of innovative, flexible, and powerful systems that embrace one or more of these compatible technologies. However, most developers, architects, Information Technology managers, and platform owners have only been exposed to the basics of resource-oriented architectures.
This talk, based upon Brian Sletten's book of the same name, is an attempt to catalog and elucidate several reusable solutions that have been seen in the wild, in the now increasingly familiar “patterns” style. These are not turnkey implementations, but rather useful strategies for solving certain problems in the development of modern, resource-oriented systems, both on the public Web and within an organization's firewalls.
Encryption is a powerful tool for privacy. At least that is what we're meant to think.
If you consider encryption to be a black box of magic, you should probably attend this talk.
If you think encryption will protect your secrets, you should probably attend this talk.
If you have (or haven't) been following the news, you should probably attend this talk.
The truth is, encryption can be an effective way of making it harder for people to steal your secrets. But it isn't magical, it isn't foolproof and, depending on how you are using it, it may be completely useless. It is a hard topic that we'll only scratch the surface of, but there are very few topics that are more crucial for our industry and profession to understand better than encryption. You don't have to understand the math (although that will help), but you do have to understand what it will and won't do for you*.
*and how implementations of it may have been intentionally compromised
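To ground the discussion, here is a hedged sketch of “using encryption” with the standard JCE APIs (AES-GCM with a random key and nonce, available in the default provider on Java 8 and later). Note how little of the real problem it solves by itself: key management, protocol design, and trusting the implementation are where things usually go wrong.

```java
// AES-GCM encryption with the standard JCE APIs; a sketch, not a security recipe.
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class EncryptDemo {
    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);                                  // may require unlimited-strength policy on older JDKs
        SecretKey key = keyGen.generateKey();

        byte[] nonce = new byte[12];
        new SecureRandom().nextBytes(nonce);               // never reuse a nonce with the same key

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, nonce));
        byte[] ciphertext = cipher.doFinal("meet at midnight".getBytes(StandardCharsets.UTF_8));

        System.out.println(ciphertext.length + " bytes of ciphertext");
    }
}
```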
Back in the day, web developers had to rely on their wits and a plethora of alert statements - to say our toolkit was spartan would be an understatement. But with the increased importance of web front ends and the rise of JavaScript MVC frameworks, a modern web developer toolkit is finally emerging.
We've evolved from text editors to full-fledged IDEs with code completion and refactoring tools, but our toolchain doesn't end there. With multiple testing libraries, mocking frameworks, test drivers, and even code coverage tools, today's web developer gets to walk downhill on a sunny day.
By now, the importance of having a mobile solution is obvious to just about every seat in the organization…but how do we develop expertise? How do we work through the inevitable politics and organizational issues? What about the technical questions of hybrid vs. web vs. native?
Before you think about a mid life career change, spend some time listening to what we did in one large company. We may not have had all the right answers, but we learned a few things along the way.
Architecting and developing user interfaces used to be relatively easy: pick a server-side framework, define a standard monitor resolution, and spend your days dealing with browser quirks. But today, the landscape presents us with a plethora of screen sizes and resolutions covering everything from a phone to a tablet to a TV. How does a team embrace this brave new world knowing that the future will introduce even more volatility to the client space?
This talk will walk you through this brave new world. We'll discuss the deluge of devices the typical application must deal with today, as well as the plethora of client-side rendering technologies. This isn't strictly a front-end problem though; we need to reconsider how we've built our server-side solutions as well, since modern applications rely on RESTful web services delivering data to a diverse set of clients. The development landscape has changed; this talk will prepare you to face these challenges.
Many software developers point their career towards ascending to the gilded rank of Architect…but what does it mean to actually be one? While many of us labor under the false pretense of abject technical decision making, the reality is often very different. You'll code less, spending more time on activities that lack an objective green/red bar. But you'll also have an opportunity to impact far more than one project.
In this talk, I'll speak to my own journey. We'll touch on influencing coworkers, the importance of communication, and the importance of a cup of coffee.
The key to understanding Clojure is ideas, not language constructs.
In this talk, we will approach Clojure via 10 Big Ideas.
Each of these ideas is valuable and useful à la carte, not necessarily only in Clojure. Taken together, they begin to fill in the picture of why Clojure is changing the way many programmers think about software development.
The goal of core.async is to decouple producers and consumers of
information in software, without dictating specific thread or blocking
semantics, and without introducing callback hell. Also, to do all of this
as a library, portable anywhere you can run a dialect of Clojure.
Queues are a powerful tool for decoupling software
programs. Unfortunately, platforms that have powerful queueing
libraries (e.g. Java) may require blocking threads on the ends of the
queue. And the world's biggest platform (JavaScript in the browser)
doesn't even have queues or threads.
core.async builds upon the work done with Communicating Sequential
Processes,
and provides:
In this talk, we will cover the design of core.async, and then move
directly to exploring core.async's capabilities:
blocking put >!! and take <!!
go blocks to invert control, achieving async without threads
parking put >! and take <!
alts! and alts!!
Finally, we will assemble these primitives into substantial working
programs, building toward the Holy Grail of async: substantial UI
application development in the browser, with no callbacks in sight.
Traditional SQL databases have great power, via ACID transactions and
via a declarative, logic-based query language (SQL). But SQL
databases encounter two problems on the web:
SQL databases have a rigid information model, and typically a rigid
deployment model. This rigidity creates impedance mismatches both with
development languages and with emerging cloud strategies.
SQL databases struggle with the extremely high write volumes that
characterize the top four percent of the web, e.g. the Amazons,
Facebooks, etc.
Much of the effort of the NoSQL movement has gone to solve the second
problem, the problem of the four percent, under the mantra “web scale”.
Datomic solves the first problem – with a flexible information model
and a deployment model suited to the dynamic web, and to the cloud.
Datomic is for the ninety-six percent.
Datomic's information model is based on a universal relation, and an
entity abstraction over that relation. The universal relation
eliminates the join keys and join tables imposed by SQL's plethora of
specific relations, making queries easier to write, read, and
understand. Entities expose the universal relation as associative
collections, which correspond directly to the navigable, associative
style of OO, eliminating the impedance mismatch and the need for
libraries such as ActiveRecord and Hibernate.
Datomic's deployment model is designed for a virtualized world. All
components of the system are designed from the ground up to be
ephemeral. More importantly, Datomic treats storage as a separate
service. You can store your data on the local filesystem during
development, and upgrade to using a SQL database for storage for
production. When you are ready for the cloud, you can store your data
in a distributed storage such as Amazon's DynamoDB, Couchbase, or
Riak.
Datomic's flexibility comes with the expressive power you know and
love in SQL databases. Where the four percent need “web scale”, and
the complex programming model imposed by eventual consistency, most
programs would be better off with the traditional values: ACID
transactions and powerful queries. Datomic gives you these in spades:
transactions always run at the highest isolation level (serializable),
and the Datalog query language has greater expressive power than the
relational algebra at the foundation of SQL.
If you are among the ninety-six percent, and manage transactional data
of record on the web, Datomic may be a good fit for you.
The Cloud Foundry engineering teams have steadily increased their use of the Go programming language (http://golang.org) for building (or rebuilding) components, starting with the Router, and progressing through Loggregator, the “cf” CLI, and more recently the Health Manager 9000.
As a Java-developer-turned-DevOps-junkie focused on helping our customers and community succeed with Cloud Foundry, it's become clear to me that I need to add Go to my knowledge portfolio.
Go is a very interesting language, open-sourced by Google in late-2009, that takes a “less is more” (http://commandcenter.blogspot.de/2012/06/less-is-exponentially-more.html) approach to language design, but that also bakes in a powerful concurrency model.
This talk will introduce Go, delve into its distinctives, and contrast its approach with that of Java (where appropriate). We'll also write a fair amount of Go code along the way. This talk will be of particular interest to Java developers looking to add Go to their toolkits, but will also be of interest to anyone looking to learn a little bit more about Go.
Presentation on Akka: a set of various tools to write concurrent, fault-tolerant applications using immutable data, asynchronous message passing with local and remote actors, software transactional memory, and supervised systems.
Akka is middleware, but it is not your 1990s middleware. Akka is a set of various tools to write concurrent, fault-tolerant applications using immutable data, asynchronous message passing with local and remote actors, software transactional memory, and supervised systems. Akka is also part of the Typesafe stack, a stack that includes the Play web framework and the Scala programming language. This Akka presentation will cover both Scala and Java style usage of Akka and give the audience a 30k-foot view of how it comes together. While this presentation is not interactive, all demos will be available on GitHub for those who want to “play” along with their laptops.
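For a rough idea of the Java-style usage, here is a minimal actor sketch; it uses the classic UntypedActor API from the Akka 2.x line of that era, and the actor and message are illustrative rather than taken from the demos.

```java
// A minimal Java-style Akka actor: immutable messages, asynchronous fire-and-forget sends.
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;

public class AkkaHello {
    public static class Greeter extends UntypedActor {
        @Override
        public void onReceive(Object message) {
            if (message instanceof String) {
                System.out.println("Hello, " + message);   // actors react to messages, one at a time
            } else {
                unhandled(message);
            }
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");
        ActorRef greeter = system.actorOf(Props.create(Greeter.class), "greeter");
        greeter.tell("NFJS", ActorRef.noSender());          // asynchronous send
        system.shutdown();                                   // 2.x-era shutdown; newer versions use terminate()
    }
}
```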
Time is very precious and is often threatened by phone calls, emails, co-workers, bosses, and, most of all, yourself. The Pomodoro Technique reins in unfocused time and gives your work the urgency and the attention it needs - and it's done with a kitchen timer.
In this presentation we discuss how to set up, estimate time, log time, deal with interruptions, and integrate with Agile as a team. We discuss timer software and even some of the great health benefits of the Pomodoro Technique.
In this session we will look to see how we can refactor our learning - what tools, and methodologies can we use to help us learn quicker and better - how we can create a store that gives us quick access to information when we really need it.
We all work in an industry in which not only do the tools that we use change every few years, but one in which we have to shift the very paradigms these tools are built on! Even the most trivial of projects entails tens of different toolkits, frameworks, and languages coming together, and somehow we need to know how to leverage each one. How does one keep up? Despite all our years in school, and our inborn nature to learn, we are often never taught how to learn. How can we learn faster, and retain even more?
In this session we will take a look at various tools and techniques available to us and see how we can make our learning effective.
Find yourself overwhelmed with hundreds of to-dos? Is your hard drive littered with dozens of killer ideas that you started with enthusiasm and then just fizzled away? Do you feel like you are moving as fast as you can but only getting to the wrong place quicker? Well, perhaps this session will help.
There are various techniques and strategies available to us today that aim to help with exactly this conundrum - from Getting Things Done ™ to Personal Kanban. Unfortunately it is often easy to be extremely productive using these systems, but not very effective. After all, it's not about getting things done, but getting the RIGHT things done. In this talk we will discuss not only how to get things done, but also attempt to figure out what it is you actually need to be doing.
In this session, I will attempt to show you how you can leverage various strategies to be more effective, knock to-dos out and have fun while doing it. If time permits we will close with an overview of the tools that are available to you, and how you can use these to become a to-do list ninja :)
Statistics is hot lately, due in part to the easy availability of large data sets and the successes of people like Nate Silver. These aren't your father's statistics, however. A quiet revolution has swept through the field, shifting it from traditional frequentist methods toward a more Bayesian approach. This talk will discuss Bayes' Theorem and show you how to do simple, back-of-the-envelope calculations to apply it to a wide variety of problems.
In addition, we'll also talk about common errors non-experts make when dealing with statistical conclusions, ranging from small sample size issues to the use of arbitrary endpoints to the problem of overfitting and more.
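As an example of the back-of-the-envelope style, here is a small calculation (with illustrative numbers, not data from the talk) of the classic rare-disease screening problem.

```java
// Bayes' Theorem applied to a rare-disease screening test; numbers are illustrative.
public class BayesDemo {
    public static void main(String[] args) {
        double prior = 0.001;          // P(disease): 1 in 1,000 people
        double sensitivity = 0.99;     // P(positive | disease)
        double falsePositive = 0.05;   // P(positive | no disease)

        // P(positive) via the law of total probability
        double pPositive = sensitivity * prior + falsePositive * (1 - prior);

        // Bayes' Theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
        double posterior = sensitivity * prior / pPositive;

        System.out.printf("P(disease | positive) = %.3f%n", posterior);  // ~0.019
    }
}
```

Even with a 99% sensitive test, a positive result for a 1-in-1,000 condition leaves only about a 2% chance of disease - exactly the kind of counterintuitive result Bayes' Theorem makes easy to compute.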
Java 8 is finally released. We haven't seen this number of changes to Java the language since Java 5. This session will provide a review of the changes in Java 8, with a focus on the language changes such as the addition of lambdas - which will either have you excited about a language feature that allows you to express what you want rather than how to do it, or have you concerned about readability in your code base.
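For those who haven't seen the syntax, here is a hedged taste of what “expressing what you want” looks like with lambdas and streams; the data is illustrative.

```java
// A pipeline describing what the result should be, rather than a loop describing how to build it.
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LambdaTaste {
    public static void main(String[] args) {
        List<String> speakers = Arrays.asList("Venkat", "Neal", "Ken", "Nate");

        List<String> shortNames = speakers.stream()
                .filter(name -> name.length() <= 4)   // lambda expression
                .map(String::toUpperCase)             // method reference
                .collect(Collectors.toList());

        System.out.println(shortNames);   // [NEAL, KEN, NATE]
    }
}
```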
This session will introduce the changes of Java 8 including language changes such as:
and will provide details of the JVM changes such as:
So your server is having issues? Memory? Connections? Limited response? Is the first solution to bounce the server? Perhaps change some VM flags or add some logging? In today's Java 6 world, with its superior runtime monitoring and management capabilities, the reasons to bounce the server have been greatly reduced.
This session will look at the Java monitoring and management capabilities, which include the ability to make VM argument changes on the fly. This session will dive into the different memory compartments and how they are used by the JVM. Finally, this session will explore the different GC options and how they affect response times and throughput.
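A small sketch of what those monitoring APIs look like from code (illustrative, using the standard java.lang.management MXBeans) appears below.

```java
// Reads the JVM's memory compartments at runtime instead of guessing.
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryPoolMXBean;

public class MemoryReport {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        System.out.println("Heap:     " + memory.getHeapMemoryUsage());
        System.out.println("Non-heap: " + memory.getNonHeapMemoryUsage());

        // Individual compartments (eden, survivor, old gen, code cache, ...)
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            System.out.println(pool.getName() + ": " + pool.getUsage());
        }
    }
}
```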
cloud architecture… an architectural walk through cloud services and components
high level
Data Centers / VDC
CDN
Monitoring
Load Balancing
Queue
Storage (s3, etc)
DNS
Search
Routing
(Amazon + Open Source) example: CloudSearch vs ElasticSearch
Security
low level
HAProxy
Nerve
synapse
queues
compute
dns
For decades, object-oriented programming has been sold (perhaps oversold) as the logical programming paradigm that provides “the way” to software reuse and reductions in the cost of software maintenance, as if these come for free with the simple selection of an OO language. Even with the renewed interest in functional languages, the majority of development shops are predominantly using object-oriented languages such as Java, C#, and Ruby. So most likely you are using an OO language… How is that reuse thing going? Is your organization realizing all the promises? Even as a former Rational instructor of OOAD and a long-time practitioner, I find great value in returning to the basics. This session is a return to object-oriented basics.
This session is intended to balance the often-touted theoretical object-oriented practices with lessons from the real world. The session will start with a review of some of the basics regarding abstraction and encapsulation. Although these are simple concepts, we will push the boundary of how the techniques are applied. We will discuss the difference between analysis and design and how that is reflected in our code. We will also look at the limitations of Java the language as outlined in Josh Bloch's book “Effective Java”. The session will go past the basics of object-oriented principles and into what our true goals of development really are.
In the words of John Gage, “The network is the computer.” At the heart of everything we do is a complex system of infrastructure from which we are often abstracted. For general application development this abstraction provides the convenience of simplifying our efforts. But with a growing number of mobile applications with intermittent connectivity and higher latency, and with increased hostility on the network from a security standpoint, there is great value in pulling back the curtain and understanding the details of this computer.
This session will start with an understanding of networking at a low level. At this level we will discuss IP, MAC, ARP, DNS, and DHCP. As we walk up an abstraction level, we discuss sockets, NAT, gateways, and firewalls, along with the use of TCP and UDP. Spending some time at this layer can make network developers more productive, as we look at tools which help us answer the questions “Who owns this port?”, “Where is this packet going?”, and “What is my latency, and why?”
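To make that concrete, here is a tiny, illustrative Java sketch of answering “what is my latency?” by timing a TCP connect; the host and port are placeholders.

```java
// Times a TCP connect to a host:port and reports the local ephemeral port used.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ConnectLatency {
    public static void main(String[] args) throws IOException {
        String host = "example.com";   // placeholder target
        int port = 80;

        long start = System.nanoTime();
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress(host, port), 3000);  // 3-second timeout
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("TCP connect to " + host + ":" + port + " took " + elapsedMs + " ms");
            System.out.println("Local ephemeral port: " + socket.getLocalPort());
        }
    }
}
```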
The session will end with a little fun looking at wifi, where we will sniff, snort, and crack :) From a security standpoint we will look at the challenges of wifi and how it has become the weakest component on the net.
This session is extremely fast-paced. Attendees will come away with an enhanced understanding of this thing we call the internet. It will include, through discussion or demonstration, tools such as tcproute, tcpdump, nemesis, nmap, tcpmon, and Wireshark.
There is nothing better than looking at real-world examples to understand project failures and project successes. This session is intended to be an open conversation, run much like a birds of a feather (BOF) session, but it will have a series of “that happened to me” topics throughout, discussed from the perspective of technology.
Discussed will be clients dating back to 2005. The actual clients and their names will not be revealed, but the industries, the constraints, and some of the outcomes will.