A little old lady once challenged a well-known scientist’s explanation of the structure of the universe, countering that the world is really a flat plate supported on the back of a giant tortoise. The scientist rebutted the little old lady’s challenge with one of his own by asking what the tortoise was standing on. The little old lady’s sly reply was that it’s “turtles all the way down.” So too is software architecture “turtles all the way down”.
In this session, we cover a broad range of topics that include challenging traditional practices of software architecture, examining what it takes to bring down the ivory tower, probing the paradoxical aspects of architecture’s goal, and investigating the inextricable link between temporal decisions and structural flexibility. From the highest-level applications and services to the code that exists in the bowels of the system, and everything in between, we explore how an effective software architecture must be turtles all the way down. In the end, we will all have gained deep insight into the value of agile architecture.
The modularity patterns provide us with proven design techniques to develop a modular software architecture that is extensible, reusable, maintainable, and adaptable. In this session, we’ll explore 9 of the 18 modularity patterns.
This session introduces and examines the following patterns:
The modularity patterns provide us with proven design techniques to develop a modular software architecture that is extensible, reusable, maintainable, and adaptable. In this session, we’ll explore the remaining 9 modularity patterns.
This session introduces and examines the following patterns:
We know agile methods are proven on small teams, but really…almost any process works with a team of one. As the team scales, however, the challenges mount. In this session, we focus on proven practices that help large software development teams in excess of 100 developers maintain agility. We also explore how to scale agile across the enterprise.
Come learn about ways to increase transparency on large projects, structure the development team, garner the important feedback you need from customers, and avoid the lack of rhythm so common on large teams. We’ll explore how large teams can conduct effective code reviews, maintain a consistent architectural vision, take advantage of automation, and build a development process that is sustainable on large teams. In lieu of focusing on specific agile methods in this session, we turn our attention to practices that the development team can leverage today to get started on the road to increased agility.
Getting software released to users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers and operations, delivery teams can get changes released in a matter of hours–sometimes even minutes–no matter what the size of a project or the complexity of its code base.
In this workshop we take the unique approach of moving from release back through testing to development practices, analyzing at each stage how to improve collaboration and increase feedback so as to make the delivery process as fast and efficient as possible. At the heart of the workshop is a pattern called the deployment pipeline, which involves the creation of a living system that models your organization's value stream for delivering software. We spend the first half of the workshop introducing this pattern, and discussing how to incrementally automate the build, test and deployment process, culminating in continuous deployment.
Getting software released to users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers and operations, delivery teams can get changes released in a matter of hours–sometimes even minutes–no matter what the size of a project or the complexity of its code base.
In the second half of the workshop, we introduce agile infrastructure, including the use of Puppet to automate the management of testing and production environments. We'll discuss automating data management, including migrations. Development practices that enable incremental development and delivery will be covered at length, including a discussion of why branching is inimical to continuous delivery, and how practices such as branch by abstraction and componentization provide superior alternatives that enable large and distributed teams to deliver incrementally.
Have you ever wondered what goes on inside the virtual machines and
interpreters that we use every day?
In this session, you'll see some of the inner workings of Python, PHP, Java, and JavaScript and learn that at the lowest level these languages really are not as different as they may seem.
An overview of the basic parts of a virtual machine and interpreter that compares and contrasts the different approaches taken by different modern languages.
In part II of VMs and Interpreters, you'll dive into the inner workings of the Java Virtual Machine. You'll learn how your Java code is translated from Java source code to bytecode and ultimately machine code.
You'll also see tools for viewing Java bytecode and measuring performance changes caused by VM optimizations.
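As a small, hedged preview of that tooling, the sketch below disassembles a trivial class with the JDK's javap tool; the class and method names are invented for illustration.

```java
// Adder.java -- a trivial class to disassemble (names are illustrative)
public class Adder {
    public static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3));
    }
}

// Compile and disassemble from the command line:
//   javac Adder.java
//   javap -c Adder
// The -c flag prints the bytecode for each method (iload_0, iload_1, iadd, ireturn, ...),
// which is what the JVM's JIT compiler ultimately turns into machine code.
```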
This session will cover the…
Unlike earlier languages, Java had a well-defined threading and memory model from its inception, and over the years additional packages have been added to the standard library to help solve concurrency problems.
Despite all this, Java's memory model still remains poorly understood and
its concurrency libraries underutilized.
In this talk, we'll discuss concurrency in detail, starting from the low level of the hardware up to the high level of concurrent collections and Java 7's new fork/join framework.
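For a hedged taste of that fork/join material, here is a minimal sketch that sums an array with Java 7's ForkJoinPool by recursively splitting the work; the threshold and class names are illustrative.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums a slice of an array by splitting it until the slice is small enough.
class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 10_000; // illustrative cutoff
    private final long[] numbers;
    private final int start, end;

    SumTask(long[] numbers, int start, int end) {
        this.numbers = numbers;
        this.start = start;
        this.end = end;
    }

    @Override
    protected Long compute() {
        if (end - start <= THRESHOLD) {
            long sum = 0;
            for (int i = start; i < end; i++) sum += numbers[i];
            return sum;
        }
        int mid = (start + end) / 2;
        SumTask left = new SumTask(numbers, start, mid);
        SumTask right = new SumTask(numbers, mid, end);
        left.fork();                          // run the left half asynchronously
        return right.compute() + left.join(); // compute the right half, then wait for the left
    }
}

public class ForkJoinDemo {
    public static void main(String[] args) {
        long[] numbers = new long[1_000_000];
        for (int i = 0; i < numbers.length; i++) numbers[i] = i;
        ForkJoinPool pool = new ForkJoinPool();
        System.out.println(pool.invoke(new SumTask(numbers, 0, numbers.length)));
    }
}
```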
Most of us don't want to go back to the days of malloc and free, but the garbage collector isn't always our friend.
In this presentation, you'll learn about the different garbage collection strategies used in JVMs, how to monitor garbage collection and analyze memory dumps, and why you might want to use one collection strategy instead of another.
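As one hedged example of the monitoring side, the sketch below uses the standard java.lang.management API to report what each collector has done, with the flags that select a collector shown in comments; the class name and allocation loop are purely illustrative.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // Allocate some short-lived garbage so the collectors have work to do.
        for (int i = 0; i < 1_000_000; i++) {
            String s = new String("garbage-" + i);
        }
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
        // Which collectors appear depends on how the JVM is launched, e.g.:
        //   java -verbose:gc GcStats              (log each collection)
        //   java -XX:+UseG1GC GcStats             (use the G1 collector)
        //   java -XX:+UseConcMarkSweepGC GcStats  (use the CMS collector)
    }
}
```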
There is a good amount of excitement about the new version of Java. The big evolution, of course, is lambda expressions. In this presentation we will dive into the language features in Java 8, take a look at some of their nuances, and look at ways to put them to good use.
Java 8 language capabilities and application.
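As a brief, hedged preview of the kind of code the session explores, the sketch below contrasts an anonymous inner class with a lambda and adds a small stream pipeline; the sample data is invented.

```java
import java.util.Arrays;
import java.util.List;

public class Java8Preview {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ada", "Grace", "Alan", "Barbara");

        // Before Java 8: an anonymous inner class just to pass behavior around.
        Runnable oldStyle = new Runnable() {
            @Override
            public void run() { System.out.println("hello"); }
        };

        // Java 8: the same behavior as a lambda expression.
        Runnable newStyle = () -> System.out.println("hello");
        oldStyle.run();
        newStyle.run();

        // Streams compose filtering and mapping declaratively.
        names.stream()
             .filter(name -> name.startsWith("A"))
             .map(String::toUpperCase)
             .forEach(System.out::println);
    }
}
```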
Over the past few years a few JVM languages have gained adoption. Suppose on your next project you're asked to use an alternate language. Which one would you pick? In this presentation, we'll put our personal biases away and look far beyond the syntax into the capabilities, strengths, and weaknesses of select prominent JVM languages.
We will objectively compare them and create a configurable metric that will help us select the right language for that next project.
JavaScript has a mixed heritage: OO and functional. To date, we developers have focused on the OO side of JavaScript, and not much mind-share has been given to the other, more powerful side. In this session we'll explore how to use the power of functional JavaScript.
JavaScript has elements of two distinct programming languages: Self and Scheme. These two languages are very different - and some of JavaScript's weirdness is due to this mixing of very different language designs. The conceptual models are also very different: Self is a prototype-based object language, while Scheme is a functional language. In this session, we'll discuss how the elements of the Scheme functional programming language manifest in JavaScript. We're going to explore how you can write JavaScript in a more elegant and powerful way by applying functional concepts.
So you think you've picked up enough JavaScript to be dangerous, but feel like the whole prototypical language thing is still a mystery. In this session, we'll go from basic JavaScript to advanced JavaScript. We'll discuss and code modular JavaScript with CommonJS. We'll look into the details of a prototype language and discuss things like parasitic inheritance. We'll also look at JavaScript libraries that will help you get the most out of JavaScript - not jQuery, but libraries like UnderscoreJS and SugarJS.
This is a fast-paced session meant to bring you up to speed with the latest and greatest JavaScript techniques and tools. Whether you're building client-side JavaScript with HTML5 or Appcelerator Titanium, or server-side JavaScript with node.js, you'll come away with knowledge and patterns for how the pros use JavaScript for building real apps.
We've come a long way down the JavaScript road. Gone are the days of 'just hack it' for the web - architecting even a small project in JavaScript can be a challenge. Thankfully, there are several frameworks to help you; the most popular currently is Backbone.js.
Before you start using a framework in JavaScript, you will want to understand the techniques expert JavaScript programmers use to build them. In this session, we'll dive into design patterns in JavaScript, and do live coding so you can see these patterns applied. Even if you're not using a framework, you can use these design patterns to make your code more maintainable, elegant, and concise.
Ever wish you could use your JavaScript-fu to build a NATIVE mobile app? Wish there was an open-source platform that would let you build awesome cross-platform mobile apps? Come to this session and learn about Titanium, an open-source, JavaScript-based platform for creating native mobile apps.
Titanium is an open-source development tool from Appcelerator for producing cross-platform mobile applications. Using Titanium, you develop your mobile application in JavaScript coded against the Titanium APIs. The Titanium platform then invokes its builder to take your JavaScript and build a native application for iOS and Android.
This session will walk you through the details of building great apps for the Android and iOS platforms. We'll talk about Titanium development, its ecosystem, and architecture. We'll spend time looking at lots of code - we'll build an app, in fact, while we discuss and explore the framework. We'll also spend some time discussing best practices, what to expect when developing against it, and the limits of this type of development.
Have you looked into Scala? Scala is a new object-functional JVM language. It is statically typed with type inference. It is multi-paradigm and supports both object-oriented and functional programming. And it happens to be my favorite programming language.
If you are interested in Scala, how are you planning to learn it? You are probably going to pick up a book or two and work through some examples. And hopefully, at some point down the line, you will learn the language and its syntax, and if you get excited enough, maybe build large applications using it. But what if I told you that there is a better path to enlightenment for learning Scala?
Scala Koans is a set of test cases that will teach you the Scala language. The koans will help you learn the language's syntax and structure through test cases. They will also teach the functional programming and object-oriented features of the language. Since learning is guided by failing tests, developers are free to think about and play with the language while they learn.
Namaste,
For those planning to attend the Scala Koans…
Welcome to Scala Koans!
Scala Koans is an interactive session that puts the programming and learning in your hands. Therefore, a laptop is required by all participants. If you do not have a laptop, then perhaps you have a friend with a laptop; if so, that would work too. In order to participate in the Scala Koan endeavor, a few things are required:
The process of actually running the koans will be covered during the session. Unfortunately, Internet connectivity is sometimes a dicey affair and at times it can rain on our parade. To avoid having to wait for the install on site, you can prepare for the koans before the conference! If you don't have the opportunity to do this, we will have either memory sticks or private networks at the conference.
If you want to get started with the set up:
Before attending the koans session, you may want to take the opportunity to load a Scala plugin into your favorite IDE or editor. Below is a list of resources that you can use to enhance your environment so that you can enjoy Scala syntax highlighting and other helpful tools like refactoring, debugging and analysis.
Eclipse - Eclipse has a Scala IDE plugin aptly called scala-ide. All the information about the plugin can be found at http://scala-ide.org, including an easy-to-follow video located at http://scala-ide.org/docs/current-user-doc/gettingstarted/index.html
IntelliJ - IntelliJ has a Scala plugin that can be found by going to Settings -> Plugins, clicking the 'Browse Repositories' button, and searching for the 'Scala' plugin. Right-click on 'Scala' and choose 'Install'. IntelliJ will prompt you to restart the IDE; do so, and enjoy.
NetBeans - Currently, GitHub user 'dcaoyuan' hosts a NetBeans Scala plugin at https://github.com/dcaoyuan/nbscala. I have not tried this out, since the number of NetBeans users has shrunk in recent years. If you are an avid NetBeans user and wish to try it, you can let me know the results during the session. There is additional information at http://wiki.netbeans.org/Scala
Emacs - GitHub user 'aemoncannon' has created 'ENSIME' (ENhanced Scala Interaction Mode for Emacs), which has a great following. It is hosted at https://github.com/aemoncannon/ensime, with documentation at http://aemoncannon.github.io/ensime.
VIM - VIM users can install https://github.com/derekwyatt/vim-scala, a plugin that offers Scala syntax highlighting.
That is it. Hope to see you soon.
This is the first in a new series on resource-oriented systems. The goal of the series is to provide practical guidance on the design and implementation of next generation systems that are flexible, extensible, high-performance and future-friendly. The talks are designed to work as an arc, building upon each other, but they should also stand alone. The first topic is a guided walkthrough of building quality REST APIs.
We will focus on the architecture of the Web and how it can help us model and manipulate our important business concepts. We will discuss the role of stable identifiers, intentional representation design, hypermedia affordances and architectural consistency. The goal is not to be “RESTful”, the goal is to build systems that display the properties we require.
This talk will be accessible for people new to REST, but also different enough that those who have attended previous REST talks will learn new things.
This is the second in a new series on resource-oriented systems. The goal of the series is to provide practical guidance on the design and implementation of next generation systems that are flexible, extensible, high-performance and future-friendly. The talks are designed to work as an arc, building upon each other, but they should also stand alone. This second talk is an introduction to the use of Semantic Web technologies to enable collaboration without coordination.
REST is a means to an end, but it is not a satisfactory end state. It usually pushes complexity to the client in ways that make data integration difficult across multiple sources. The W3C Semantic Web initiative introduces us to new technologies for linking resources and querying across them in powerful new ways. We will learn about the RDF model, what it brings to the table and how we can use it to connect information regardless of where and how it is stored. We will use the SPARQL protocol and query language to ask powerful questions of arbitrary resources. We will also see how we can create new information just by asking for it.
This is the third in a new series on resource-oriented systems. The goal of the series is to provide practical guidance on the design and implementation of next generation systems that are flexible, extensible, high-performance and future-friendly. The talks are designed to work as an arc, building upon each other, but they should also stand alone. This third talk will introduce you to RDFa, one of the most exciting technologies, estimated to be used on at least 25% of the indexed Web.
We understand that documents contain information, but it is usually only accessible to humans if they know where and how to find them. What if we could automatically extract arbitrary information about arbitrary domains and connect it to information held elsewhere? What if we could use the information in a document to help us organize our content better? What if this embedded information could help external search engines index the public Web better and improve your rankings?
This talk will show you how to weave and extract information in HTML, XHTML and arbitrary XML using standard tools such as RDFa. In the process, you will learn how to free the information so that it may be reused in powerful and unanticipated ways.
If you're not terrified, you're not paying attention.
Publishing information as webs of data does not require us to just give it away. We have a series of tools and techniques for managing identity, authentication, authorization and encryption so we only share content with those we trust.
Before we tackle Web Security, however, we need to figure out what we mean by Security. We will pull from the worlds of Security Engineering and Software Security to lay the foundation for technical approaches to protecting our web resources.
This is the fifth in a new series on resource-oriented systems. The goal of the series is to provide practical guidance on the design and implementation of next generation systems that are flexible, extensible, high-performance and future-friendly. The talks are designed to work as an arc, building upon each other, but they should also stand alone. This fifth talk will introduce you to the idea of leveraging popular open source visualization libraries (such as D3.js) to highlight features of integrated resources.
A picture is said to be worth more than a few words. Visualization is a strategy for conveying complex relationships, concepts and unexpected situations. It is an exciting time in the data visualization space because of all the new libraries, but the combination of rich data and rich visualization tools is even more compelling. Once our information is available in resource-oriented systems, we can visualize not only individual sources, but merged content as well.
Today’s interconnected world requires that organizations rapidly deliver flexible, integrated solutions. The conventional approach is to integrate heterogeneous applications using web services, but unfortunately that tends to tightly couple those applications. In this session we will explore several alternatives for achieving Enterprise Integration Agility.
Public Web APIs are increasing at an exponential rate, resulting in an ever more connected web. This connected contagion is not just relegated to the domain of Web 2.0 but has infected the corporate world. In fact, companies are becoming more reliant on Software as a Service (SaaS) to provide key business functions.
Combating this contagion requires an approach that provides a type of insurance against constant change and lays the foundation for evergreen enterprise solutions. In this session we will explore three popular architectural styles, Message-Oriented, Service-Oriented, and Resource-Oriented Architecture, that are used to achieve Enterprise Integration Agility. In addition, I will provide examples of each architectural style using ActiveMQ/Camel, Mule ESB, and NetKernel.
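To make the Message-Oriented style a bit more concrete, here is a minimal, hedged sketch of an Apache Camel route using its Java DSL; the directory names and queue name are invented, and a real integration would typically swap the in-memory seda endpoint for a JMS/ActiveMQ endpoint.

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

// A minimal Camel route: pick up files from an inbox directory and hand them
// to an in-memory queue, decoupling the producer from the consumer.
public class IntegrationSketch {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Endpoint URIs are illustrative; swap "seda:" for a JMS/ActiveMQ
                // endpoint to cross process boundaries.
                from("file:data/inbox").to("seda:orders");
                from("seda:orders").to("file:data/processed");
            }
        });
        context.start();
        Thread.sleep(10_000); // let the routes run briefly for the demo
        context.stop();
    }
}
```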
In this session, I will demonstrate several concurrent processing techniques including Fire and Forget, Fork-Join, Producer-Consumer, and Asynchronous Web Services using the Java Concurrency Library, the Akka Framework and the Spring Framework.
Traditional concurrent development on the Java Platform requires in-depth knowledge of threads, locks, and queues. Fortunately, new languages and frameworks that run on the Java Platform have made concurrent processing easier. This session applies concurrent processing patterns and techniques using several popular libraries and frameworks.
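As a small, hedged illustration of two of these patterns using nothing but the standard java.util.concurrent library, the sketch below shows fire-and-forget submission to an executor and a producer-consumer hand-off over a blocking queue; the task contents are invented.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrencyPatterns {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // Fire and Forget: submit work and don't wait for a result.
        pool.execute(() -> System.out.println("audit record written"));

        // Producer-Consumer: a bounded queue decouples the two sides.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
        pool.execute(() -> {                       // producer
            try {
                for (int i = 0; i < 5; i++) queue.put("order-" + i);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        pool.execute(() -> {                       // consumer
            try {
                for (int i = 0; i < 5; i++) System.out.println("processed " + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```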
In this session you will learn to strategically introduce technology innovations by applying specific change patterns to groups of individuals. Using these patterns and related techniques will not only benefit your organization but will ultimately benefit your career as a technologist by making you a better influencer, writer, and speaker.
The rapid pace of technological innovation has enabled many organizations to dramatically increase productivity while at the same time decreasing their overall headcount. However, the vacillating global economy, combined with “change fatigue” within organizations, has resulted in a risk-averse culture. In such an environment, how can one possibly introduce and inculcate the latest technology or process within an organization? The answer is to have a solid understanding of Diffusion Theory and to leverage Patterns of Change.
Prezi Location: http://prezi.com/b85wwmw7hccn
Robert Martin assembled the SOLID family of principles to provide a useful guide to help us create object-oriented software designs that were resilient in the face of change. In recent years, the need to write highly-concurrent software in order to leverage increasingly ubiquitous multicore architectures, as well as general interest in more effectively controlling complexity in large software designs, has driven a renewed interest in the functional programming paradigm. Given the apparent similarity in their goals, “What is the intersection of SOLID with functional programming?” is a natural question to ask.
In this talk, we'll explore this intersection. We'll begin with a tour of the evolutionary patterns associated with enterprise software and programming paradigms, as well as take a look at the ongoing quest for best practices, the goal being to elucidate the motivation for examining this intersection of SOLID and functional programming. We'll then walk through each of the SOLID principles, examining them in their original object-oriented context, and looking at example problems and solutions using the Java language. Then for each principle, we'll examine its possible intersection with the functional programming paradigm, and explore the same problems and solutions using the Clojure language. We'll close by examining the transcendent qualities of the SOLID principles and how they can make any design simpler, regardless of the programming paradigm employed.
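As a hedged taste of the Java side of that exploration, the sketch below shows one principle, the Open/Closed Principle, satisfied by passing behavior in as a function rather than adding branches, which is also where the overlap with functional programming begins to show; the domain and names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class OpenClosedSketch {
    static class Product {
        final String category;
        final double price;
        Product(String category, double price) {
            this.category = category;
            this.price = price;
        }
    }

    // Closed for modification: new selection rules arrive as new predicates,
    // not as new if/else branches inside this method.
    static List<Product> select(List<Product> products, Predicate<Product> rule) {
        List<Product> matches = new ArrayList<>();
        for (Product p : products) {
            if (rule.test(p)) matches.add(p);
        }
        return matches;
    }

    public static void main(String[] args) {
        List<Product> catalog = Arrays.asList(
                new Product("book", 25.0), new Product("laptop", 1200.0));

        // Behavior is supplied from the outside, so adding a rule needs no edits above.
        System.out.println(select(catalog, p -> p.price > 100).size());             // 1
        System.out.println(select(catalog, p -> "book".equals(p.category)).size()); // 1
    }
}
```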
For much of the last two years I've delivered a two-part series at NFJS shows entitled “Effective Java Reloaded.” For all practical purposes, it is an à la carte-style rehash of the book Effective Java, written by Josh Bloch. One of my favorite parts of the discussion is Item #15, which tells us to “Minimize Mutability.” If we turn this inside out, we're actually saying that we want to MAXIMIZE IMMUTABILITY. When we do this, we reap many benefits, such as code that is easier to reason about and that is inherently thread-safe. This can carry us a long way in the direction of program correctness and decreased complexity. However, when we start to program with immutability, several major questions arise.
First, the necessity of using a separate object for each distinct value, never reusing, or “mutating,” an object, can quickly cause performance concerns. These concerns are amplified when we're talking about large collections such as lists and maps. These problems are largely solved by what we call “persistent data structures.” Persistent data structures are collections from which we create new values, not by copying the entire data structure and applying changes, but by creating a new structure which contains our changes but points at the previous structure for those elements which have not changed. This allows us to work with data structures in a very performant way with respect to time and resource consumption. We'll examine persistent data structures, their associated algorithms, and implementations on the JVM such as those found in the TotallyLazy library.
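To make the structural-sharing idea concrete, here is a deliberately tiny, hedged sketch of a persistent singly linked list in Java: prepending creates a new head node that points at the existing list rather than copying it. Real libraries such as TotallyLazy are far more sophisticated, so treat this only as an illustration of the principle.

```java
// A minimal persistent (immutable) list: "adding" an element returns a new
// list whose tail is the existing list, so nothing is ever copied or mutated.
public final class PersistentList<T> {
    private final T head;
    private final PersistentList<T> tail;
    private static final PersistentList<Object> EMPTY = new PersistentList<>(null, null);

    private PersistentList(T head, PersistentList<T> tail) {
        this.head = head;
        this.tail = tail;
    }

    @SuppressWarnings("unchecked")
    public static <T> PersistentList<T> empty() {
        return (PersistentList<T>) EMPTY;
    }

    public PersistentList<T> prepend(T value) {
        return new PersistentList<>(value, this);   // shares this entire list as the tail
    }

    public T head() { return head; }
    public PersistentList<T> tail() { return tail; }
    public boolean isEmpty() { return this == EMPTY; }

    public static void main(String[] args) {
        PersistentList<String> base = PersistentList.<String>empty().prepend("b").prepend("a");
        PersistentList<String> extended = base.prepend("x");
        // base is untouched: both lists share the ["a", "b"] structure.
        System.out.println("base starts with " + base.head()
                + "; extended starts with " + extended.head());
    }
}
```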
Second, because all of an immutable object's state must be provided at the time of construction, the construction of large objects can become very tedious and error prone. We'll examine how the Builder pattern can be applied to ease the construction of large objects, and we'll examine Builder implementations in Java and Groovy.
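Here is a brief, hedged sketch of the Java flavor of that Builder approach: the product class is immutable (all fields final, no setters) and the builder accumulates state until build() is called; the domain and field names are invented.

```java
// An immutable value object constructed via a Builder, so callers never juggle
// long constructor argument lists and the finished object can never change.
public final class Order {
    private final String customer;
    private final String product;
    private final int quantity;
    private final boolean giftWrapped;

    private Order(Builder builder) {
        this.customer = builder.customer;
        this.product = builder.product;
        this.quantity = builder.quantity;
        this.giftWrapped = builder.giftWrapped;
    }

    public static class Builder {
        private String customer;
        private String product;
        private int quantity = 1;          // sensible default
        private boolean giftWrapped;

        public Builder customer(String customer) { this.customer = customer; return this; }
        public Builder product(String product)   { this.product = product;   return this; }
        public Builder quantity(int quantity)    { this.quantity = quantity; return this; }
        public Builder giftWrapped(boolean g)    { this.giftWrapped = g;     return this; }

        public Order build() { return new Order(this); }
    }

    public static void main(String[] args) {
        Order order = new Order.Builder()
                .customer("Alice")
                .product("widget")
                .quantity(3)
                .build();
        System.out.println(order.quantity);
    }
}
```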
Third, we run into problems when we start to use frameworks that expect us to program in a mutable style. A prime example is Hibernate, which expects our persistent classes to follow the well-worn JavaBean convention, including a no-argument constructor and getters and setters for each property. Such a class can never be immutable! So how do we program with frameworks such as Hibernate and yet still minimize mutability? The key is found in not letting frameworks dictate the way that you design your code. Just because the framework requires something, don't let it force you to make the wrong decision. Use the framework as a tool to write your code; don't let your code be a tool of the framework. We'll examine strategies for doing exactly that.
You should come away from this talk better equipped to program in a way that minimizes mutability and maximizes immutability.
No matter where you slice software engineering:
The root cause of many, if not most, problems is the common absence of critical thinking in how we approach decision making. Instead of thinking critically about our engineering decisions, we often follow a Cargo Cult mentality or blindly follow the pronouncements of the Blowhard Jamboree. The end results all too often include suboptimal productivity, excessive spending, poor quality and cancelled projects.
When we instead think critically about a component of software engineering, we take it apart. We discard our presuppositions. We challenge tradition. We gather our own evidence. We question everything.
This talk will examine the pathologies associated with not thinking critically, including a tour of the antipatterns that can emerge from such a practice. We'll then walk through the concentric circles of the critical thinking process, including evidence evaluation, argument evaluation, and argument construction. You'll leave this session with a critical thinking framework which can be applied to software engineering as well as beyond.
The basics of developing for the Android platform will be explored, from setting up the SDK to using the Android Studio IDE and the generated Gradle build files. No previous experience is required, other than a basic knowledge of Java.
After discussing how Android fits into the marketplace, we'll look at creating applications, how to use activities, and working with layouts.
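As a hedged preview, here is roughly the smallest useful Activity; the layout resource name matches what Android Studio generates by default, and the R constant comes from your project's generated resources.

```java
import android.app.Activity;
import android.os.Bundle;
import android.util.Log;

// A minimal Activity: Android instantiates it and calls onCreate() when the
// screen is first shown; setContentView() inflates the XML layout into views.
public class MainActivity extends Activity {
    private static final String TAG = "MainActivity";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main); // res/layout/activity_main.xml
        Log.d(TAG, "Activity created");
    }
}
```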
Building on the previous talk, we'll add intents, customized layouts for alternative configurations, talk about the activity lifecycle, use logging, and more.
We'll deploy to both emulators and connected devices, and change input styles.
After almost a decade and several significant releases, Spring has come a long way from challenging the then-current Java standards to becoming the de facto enterprise standard itself. Although the Spring programming model continues to evolve, it still maintains backward compatibility with many of its earlier features and paradigms. Consequently, there's often more than one way to do anything in Spring. How do you know which way is the right way?
In this session, we'll explore several ways that Spring has changed over the years and look at the best approaches when working with the latest versions of Spring.
In this presentation, we'll see how to use Spring to create, secure, streamline, hyperlink, and consume REST APIs.
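As a hedged preview of the Spring MVC side of that, here is a minimal REST endpoint sketch; the domain type, paths, and data are invented for illustration, and JSON rendering assumes Jackson is on the classpath.

```java
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

// A minimal Spring MVC REST controller: @RestController means every handler's
// return value is written straight to the response body (typically as JSON).
@RestController
@RequestMapping("/books")
public class BookController {

    @RequestMapping(method = RequestMethod.GET, value = "/{id}")
    public Book book(@PathVariable long id) {
        return new Book(id, "REST in Practice");   // stand-in for a repository lookup
    }

    public static class Book {
        public final long id;
        public final String title;
        public Book(long id, String title) { this.id = id; this.title = title; }
    }
}
```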
In modern applications, there is a diverse array of clients consuming content from the web. Each of these clients has unique capabilities and limitations, demanding that the presentation of the application be tailored to each device. As a result, presentation logic is often pushed into the client itself, leaving the application to serve a common data-oriented lightweight API to be consumed by each client.
In this session, we're going to combine the magic of Spring Boot and the magic of Spring Data to yield something even more powerful. You'll see how to quickly build an application's persistence layer, whether it stores data in a RDBMS, Mongo, Neo4j, or several other popular data stores. You'll also see how to create a functioning REST API with nothing more than an interface and a domain type.
Spring Boot dramatically simplifies application development with Spring. But before Spring Boot came along, Spring Data was already making developers' lives easy when it comes to working with data. When combined, Spring Data and Spring Boot can make data persistence the easiest part of your application.
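To make the "nothing more than an interface and a domain type" idea concrete, here is a hedged sketch; the entity, repository, and property names are invented, and it assumes a Spring Boot project with the Spring Data JPA (and, for the REST exposure, Spring Data REST) starters on the classpath.

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import java.util.List;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.data.repository.CrudRepository;

@Entity
class Person {
    @Id @GeneratedValue
    Long id;
    String lastName;
}

// Spring Data generates the implementation, including the derived query below,
// and Spring Data REST can expose it over HTTP without any controller code.
interface PersonRepository extends CrudRepository<Person, Long> {
    List<Person> findByLastName(String lastName);
}

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
```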
There is nothing better than looking at real-world examples to understand project failures and project successes. This session is intended to be an open conversation, structured much like a birds of a feather (BOF) session; however, it will have a series of “that happened to me” topics throughout, discussed from the perspective of technology.
Discussed will be clients dating back to 2005. The actual clients and their names will not be revealed, but the industries, the constraints, and some of the outcomes will.
The Gradle build tool is one of the most successful projects in the Groovy ecosystem because it addresses a difficult problem – every major build is a custom build. Gradle builds are written in Groovy, so the full power of the language is available if you need it. Gradle supports Maven project structure and repositories and uses Ivy dependency management without being bound by their normal constraints. With major systems like Grails, Hibernate, and the Spring Framework moving to Gradle, this is a technology worth taking the time to understand.
This talk will cover the basics of Gradle both through simple examples and by examining the build files for major open source projects.
The Single Page App, or SPA, requires the developer to think about architecture in new ways compared to traditional server-side page-generation web apps. Whether you've used Java web frameworks like Spring Web or Struts, dynamic frameworks like Grails or Rails, or even Django or PHP, you will need to learn some new techniques for building SPA-type applications. We discuss the architecture and design of Service Oriented Front End Architectures (SOFEA) in this session.
In this session, we'll look at structuring your app from both the client and server side. Using a SPA framework like Backbone.js or Ember can help, but does not answer critical design questions. This is a fast paced session where we'll talk about SPAs, REST, SEO, and much more.
There's a bevy of options for developing mobile apps. If you're looking at cross-platform solutions, there's a multitude of options to choose from. In this session we'll explore the three basic categories for developing mobile apps: native, cross-platform-to-native, and mobile web. We'll discuss the sweet spot for each of these three approaches and the benefits and drawbacks of each. Technologies discussed include Android, iOS, HTML5/CSS3, Phonegap, Titanium, and jQuery Mobile.
Relational databases have ruled the world since the dawn of time (or so it appears). They power our enterprises and for many in the corporate world, it may be hard to imagine life without them. Each decade a novel idea would challenge the status quo and make a case to deviate from the tradition. A flock of enthusiastic programmers, like your humble speaker back in the early 90s, would throw their support behind it, only to be crushed eventually by the large vendors and enterprise standards. But the excitement around NoSQL has shown that enterprise data is not the only thing that's persistent.
In this presentation we'll learn how NoSQL deviates from those deep-rooted traditional approaches, and how this may be useful. We will also discuss the situations where these types of databases may be more appropriate.
As a web application developer, most of your focus is on user stories and producing business value for your company or clients. Increasingly, however, the World Wide Web is more like the wild wild web, an increasingly hostile environment for web applications. It is absolutely necessary for web application teams to have security knowledge and a security model, and to leverage proper security tools.
This training workshop on security will provide an overview of the security landscape, starting with the OWASP top ten security concerns with current real-world examples of each of these attack vectors. The first session will consist of a demonstration and labs using hacker tools to get an understanding of how a hacker thinks. It will include a walkthrough of the ESAPI toolkit as an example of how to solve a number of these security concerns, including hands-on labs using the OWASP example swingset.
The workshop will include several hands-on labs from the WebGoat project in order to better understand the threats that are ever so common today.
Attendees will come away with the following skills / capabilities:
Don't be the weakest link on the web!
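As one small, hedged example of the kind of defense the ESAPI walkthrough covers, output-encoding untrusted data before it reaches the browser blunts reflected XSS; the servlet and parameter names below are invented for illustration.

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.owasp.esapi.ESAPI;

// Never echo request data back to the page raw: HTML-encode it first so that
// injected <script> payloads render as harmless text instead of executing.
public class GreetingServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        String name = request.getParameter("name");            // untrusted input
        String safe = ESAPI.encoder().encodeForHTML(name);      // OWASP ESAPI encoder
        response.setContentType("text/html");
        response.getWriter().println("<p>Hello, " + safe + "</p>");
    }
}
```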
Often we avoid testing the most important things simply because they are hard to test,
but doing so runs the risk that these untested bits of code may fail at the worst
possible moment.
In this talk, we'll explore some creative (and crazy) ways to test some of the hardest
problems including:
In modern applications, JavaScript is increasingly prevalent both on the client side and to some degree on the server side. As we continue to crank out more JavaScript code, we're finding that many of the same hard lessons we learned writing decoupled Java code apply equally to JavaScript code. Without the benefit of dependency injection and AOP, both Java and JavaScript code can quickly become an unnavigable and untestable mess.
Where frameworks like Spring have helped us gain control over our Java code, Cujo.js similarly aims to give our JavaScript code more structure and testability.
In this session, we'll look at Cujo.js, an “unframework” whose dependency injection takes JavaScript's unique needs into consideration to create loosely coupled code. We'll also see how, although Cujo.js isn't strictly a UI framework, elements of Cujo.js can be brought together to elegantly build client-side UIs.
Redis may be the most misunderstood of the already highly misunderstood NoSQL
databases. This misunderstanding is a shame because Redis might also be the most
versatile and can work well in conjunction with (not as a replacement for) a traditional
relational database.
While nominally a key-value store, Redis is much more, because each value can be a
document, a large bit field, a counter, a hash, a set, a sorted set, a list, or a queue.
This variety of data structures allows Redis to adapt to many different problems
including caching and queueing.
In this talk, you'll learn about the fundamental data structures that Redis supports and
see different ideas on how you might use them.
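For a hedged taste of those data structures from Java, here is a sketch using the Jedis client (an assumption; the talk itself is not tied to any particular client) against a Redis server on localhost, with invented key names.

```java
import redis.clients.jedis.Jedis;

public class RedisStructures {
    public static void main(String[] args) {
        Jedis redis = new Jedis("localhost");          // assumes a local Redis server

        redis.set("greeting", "hello");                // plain string value
        redis.incr("page:views");                      // strings double as atomic counters

        redis.lpush("jobs", "encode-video-42");        // list used as a simple queue...
        String nextJob = redis.rpop("jobs");           // ...consumed from the other end

        redis.sadd("tags:article:7", "java", "redis"); // set: unordered, no duplicates
        redis.zadd("leaderboard", 1500, "alice");      // sorted set: ranked by score
        redis.hset("user:42", "name", "Alice");        // hash: a small field/value map

        System.out.println(redis.get("greeting") + ", next job: " + nextJob);
        redis.close();
    }
}
```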
What's the point of attending a conference unless you do something with the knowledge you gain?
In this session we look at practical strategies for selecting new technologies and proven approaches for driving adoption back at the office.
Invokedynamic is the Java 7 feature that had the most impact at the bytecode level and also in terms of performance. First perceived as a feature to help dynamically typed languages on the JVM, it turned into a powerful feature that has been exploited quite a bit in the implementation of features in the Java language itself.
In this presentation we will understand what problem this feature really solves and how it has influenced other features in the Java language and on the Java platform.
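A quick, hedged way to see that influence for yourself: compile a class that uses a lambda and disassemble it; the lambda shows up not as an anonymous inner class but as an invokedynamic call site that the JDK's LambdaMetafactory links at runtime.

```java
// LambdaSite.java
public class LambdaSite {
    public static void main(String[] args) {
        Runnable task = () -> System.out.println("hi");   // captured via an invokedynamic call site
        task.run();
    }
}

// From the command line:
//   javac LambdaSite.java
//   javap -c -p LambdaSite
// In main() you will see an "invokedynamic" instruction (bootstrapped via
// LambdaMetafactory) where an anonymous inner class would have appeared before Java 8.
```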
Have you ever wished that your local development sandbox could look exactly like production, but you've got a mismatch between your local OS and your production OS? And what about the age-old “it works on my machine” excuse that quite often stems from differences between developer sandboxes? Many have turned to virtualization, creating a machine image that can be passed around the team. But who manages the template? How do you keep things in sync?
In this session, we'll explore Vagrant (http://www.vagrantup.com), an open source tool that allows you to easily create and manage virtual development environments that can be provisioned on demand and “thrown away” when no longer needed.
Vagrant is most powerful when we think of it as a tool to enable various workflows that are useful to software development teams. In this talk, we'll walk through the following workflows and examine Vagrant's contributions:
Statistics is hot lately, due in part to the easy availability of large data sets and the successes of people like Nate Silver. These aren't your father's statistics, however. A quiet revolution has swept through the field, shifting it from traditional frequentist methods toward a more Bayesian approach. This talk will discuss Bayes' Theorem and show you how to do simple, back-of-the-envelope calculations to apply it to a wide variety of problems.
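For reference, the back-of-the-envelope machinery is just Bayes' Theorem; the worked numbers below (1% prevalence, a 99%-sensitive test with a 5% false-positive rate) are an invented example of the kind of calculation the talk walks through.

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
            = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}

% Worked example: a positive result from a 99%-sensitive test with a 5% false-positive
% rate, for a condition with 1% prevalence:
P(H \mid +) = \frac{0.99 \times 0.01}{0.99 \times 0.01 + 0.05 \times 0.99}
            = \frac{0.0099}{0.0594} \approx 0.167
```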
In addition, we'll also talk about common errors non-experts make when dealing with statistical conclusions, ranging from small sample size issues to the use of arbitrary endpoints to the problem of overfitting and more.