Modularity is coming to the Java platform! Java 9 will introduce the Jigsaw module system. OSGi is here today. But don’t wait to start designing modular software. Contrary to popular belief, you don't need a framework or a new runtime to start building modular software applications. You can start today. Learn how!
In this session, we'll examine what it means to develop modular software on the Java platform. We'll examine the goals and benefits of modular software, and explore the patterns of modular architecture that help us develop modular software systems. With just a few easy steps, we'll see how to transform our software from a huge monolith to an extensible system of collaborating software modules. By examining an existing software system, we'll see first hand how we can increase software modularity with minimal disruption. You'll walk away not just with a much deeper understanding of the benefits of modular software, but also a migration roadmap for refactoring existing applications to increase their modularity. In other words, you'll see how to get ready today for the application platform of tomorrow.
OSGi is the dynamic module system for the Java platform. Today, OSGi is a major part of most application platforms, tools, and is supported by many major frameworks. In this session, we'll explore the fundamental underpinnings of OSGi, explore the OSGi ecosystem, clearly articulate the benefits of OSGi, and examine OSGi in the context of other microservice architectures.
After a gentle introduction to OSGi, we'll explore how OSGi allows us to overcome classpath hell and design very adaptable and flexible software systems. Through several coding examples, we'll explore numerous OSGi techniques, including versioning, hot deployment, isolation, lifecycle, and OSGi specific microservices.
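As a taste of the kind of code we'll work with, here is a minimal sketch (not taken from the session materials) of an OSGi bundle activator that registers a hypothetical Greeter service. It assumes the OSGi core API is on the classpath and that the bundle's manifest names this class in its Bundle-Activator header.

```java
package com.example.greeter;

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Hypothetical service interface, declared here only to keep the sketch self-contained.
interface Greeter {
    String greet(String name);
}

// Activator that publishes a Greeter service when the bundle starts
// and withdraws it when the bundle stops.
public class GreeterActivator implements BundleActivator {

    private ServiceRegistration<Greeter> registration;

    @Override
    public void start(BundleContext context) {
        Greeter impl = name -> "Hello, " + name;
        // No service properties for this simple example.
        registration = context.registerService(Greeter.class, impl, null);
    }

    @Override
    public void stop(BundleContext context) {
        registration.unregister();
    }
}
```

Because the service is registered and unregistered with the bundle's lifecycle, consumers can react to it coming and going, which is the basis of the hot-deployment and OSGi microservice techniques covered in the session.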
Traditional approaches to software architecture are broken. Attempts to define the architectural vision for a system early in the development lifecycle do not work. In today’s volatile technology and business climate, big architecture up front is not sustainable. In this session, we will explore several principles that help us create more flexible and adaptable software systems. But first, we’ll expose the true essence of what’s meant when we say “architectural agility.”
What’s the goal of architecture? To serve as a blueprint of the system that everyone understands? Possess the flexibility to evolve as new requirements emerge? To satisfy the architectural qualities, including performance, security, availability, reliability, and scalability? Yes. Yes. Yes. At the heart of these three questions are the three pillars of architecture - social, process, and structure. But how do we create software architectures that achieves all of these goals? And how do we ensure no disconnect occurs between developers responsible for implementation and architects responsible for the vision? In this session, we’ll explore several principles to increase architectural agility and provide some actionable advice that will help you get started immediately.
New architectural paradigms are emerging that challenge traditional assumptions about the way that scalable and adaptable software is built. At the heart of these paradigms is a modular approach that breaks apart the monolithic application into microservices. But breaking apart the monolith has implications beyond software architecture and microservices are just one implementation alternative.
In this session, we will explore modularity’s fundamental role in a large scale software architecture. We’ll compare and contrast different implementation technologies for building modular architectures. And we’ll discover the impact that modern architecture has on infrastructure and methodology. Finally, we will examine how modern web and mobile apps fit into this overall architectural story.
Microservice architecture is a modern architectural approach that focuses on breaking apart the monolith and building modular services. But the framework we use has a tremendous impact on how we build and deploy services. A new type of framework has emerged that provides a lightweight stack for building microservices.
In this session, we will explore some modern Java micro frameworks for building microservices. Example frameworks you may see include Dropwizard, Spark, Ninja, RestExpress, Play, Restlet, and RestX.
We've been building Java applications the same way for the greater part of a decade. But these monolithic applications have shortcomings that prevent us from increasing development agility and architectural flexibility. In this session, we'll build a simple system with a plugin architecture and demonstrate its advantages across the full software development lifecycle.
Are you familiar with the Open Closed Principle (OCP), which states that software should be "open for extension but closed for modification"? If so, then you know that OCP is a class design principle. But what happens when we extend the definition of OCP and start applying the idea at an architectural level? The answer is that we realize a positive impact on development team structure, software builds, and runtime flexibility. In this session, we'll build a simple system with a plugin architecture and demonstrate how OCP applied at an architectural level opens a new world of possibilities.
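As a small illustration of the idea (not the session's actual codebase), the sketch below uses the JDK's ServiceLoader as a plugin mechanism: the core stays closed for modification, while new behavior arrives as additional implementations of a hypothetical ReportPlugin extension point discovered at runtime.

```java
import java.util.ServiceLoader;

// Hypothetical extension point: the core never changes when new reports appear;
// each report ships as its own jar that lists its implementation class in
// META-INF/services/<fully qualified interface name>.
interface ReportPlugin {
    String name();
    void run();
}

public class ReportRunner {
    public static void main(String[] args) {
        // Discovers every ReportPlugin implementation available on the classpath.
        ServiceLoader<ReportPlugin> plugins = ServiceLoader.load(ReportPlugin.class);
        for (ReportPlugin plugin : plugins) {
            System.out.println("Running " + plugin.name());
            plugin.run();
        }
    }
}
```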
The new facilities in Java 8 are about to change the way we write code. Our code will become more expressive and concise. But exactly how?
In this presentation we will take several common Java code examples, discuss the core idea expressed in code, and transform that code to use the facilities in Java 8. Watch and interact as you see Java code go through a weight loss program right in front of your eyes.
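A minimal before-and-after sketch of the kind of transformation we'll walk through, here replacing an anonymous Comparator with a one-liner:

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class WeightLoss {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Scala", "Go", "Java", "JavaScript");

        // Before: an anonymous inner class just to say "compare by length".
        names.sort(new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });

        // After: the same intent, stated directly.
        names.sort(Comparator.comparingInt(String::length));

        System.out.println(names);
    }
}
```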
Functional programming has gained the recognition it deserves, and almost all mainstream languages now support a functional style of programming. With the introduction of lambdas in Java 8, we have new tools in our hands. In this presentation, we explore these tools and how we can benefit from them.
We start with an introduction to lambdas, learn about imperative vs. declarative style, and see how to solve some common problems with them.
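For a flavor of the imperative-versus-declarative contrast, here is a small illustrative sketch (not from the session materials) that totals prices over a threshold both ways:

```java
import java.util.Arrays;
import java.util.List;

public class Totals {
    public static void main(String[] args) {
        List<Integer> prices = Arrays.asList(10, 23, 35, 42, 58);

        // Imperative: we spell out how to loop, filter, and accumulate.
        int total = 0;
        for (int price : prices) {
            if (price > 20) {
                total += price;
            }
        }
        System.out.println(total);

        // Declarative: we state what we want and let the library iterate.
        int total2 = prices.stream()
                           .filter(price -> price > 20)
                           .mapToInt(Integer::intValue)
                           .sum();
        System.out.println(total2);
    }
}
```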
We all have heard about lambda expressions in Java 8, but the real unsung hero of Java 8 is the Streams API.
In this presentation we will take an example-driven approach to exploring streams: their core capabilities from the application development point of view, and how to exploit these capabilities to improve performance and efficiency.
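A small sketch of the style of examples we'll explore, grouping values with a collector and switching a pipeline to parallel execution:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class StreamsDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("lambda", "stream", "collector", "map", "filter", "reduce");

        // Group words by length without writing any loop or temporary map by hand.
        Map<Integer, List<String>> byLength = words.stream()
                .collect(Collectors.groupingBy(String::length));
        System.out.println(byLength);

        // The same pipeline can run in parallel with a one-word change; whether
        // that helps depends on the data size and the cost per element.
        long count = words.parallelStream()
                .filter(word -> word.length() > 5)
                .count();
        System.out.println(count);
    }
}
```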
Big data, smart phones, tablets, wearable devices, multicore processors… we're seeing rapid and drastic evolution in both computing environments and application needs. Thankfully, a number of technologies have been evolving over the past several years to meet these demands. The Reactive Manifesto is an attempt to reinforce the set of technical practices needed to meet these emerging demands.
If you've heard and read about reactive programming, but are eager to realize that in code, then this presentation is for you. We will start with a rapid introduction and get into code to learn about reactive programming, not in theory but with practical examples.
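As one possible flavor of the code (the session may well use a different library), here is a short sketch built on RxJava 1.x, one of several libraries that implement reactive ideas on the JVM:

```java
import rx.Observable;

public class ReactiveSketch {
    public static void main(String[] args) {
        // A pipeline over a stream of events (here just numbers); the same
        // operators apply to asynchronous sources such as network responses
        // or UI events.
        Observable.just(1, 2, 3, 4, 5)
                  .filter(n -> n % 2 == 0)
                  .map(n -> "event-" + n)
                  .subscribe(
                      System.out::println,          // onNext
                      Throwable::printStackTrace);  // onError
    }
}
```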
In this session, we'll dig deep into the performance aspects of JavaScript and the web browser. Single-page web applications are quickly becoming popular, and understanding both the low-level and high-level aspects of the browser platform and the JavaScript runtimes embedded in it is important.
We'll cover topics such as browser pipelining, memory management, and testing and measuring performance.
The usage of JavaScript on the server is rising. In this session, we'll talk about all the tools and utilities that comprise a modern JavaScript application, from a server-side point of view.
The usage of JavaScript on the command line and server has exploded over the past couple of years. Node.js has become popular with development shops from startups to big corporations. With its asynchronous nature, JavaScript provides the ability to scale dramatically. Along with the ability to drive your server-side applications, there are a number of tools that help with all aspects of browser development: testing, packaging, and deployment. In this session, we'll explore these tools and show you how you can incorporate these into your environment.
JavaScript has first-class support for functional programming, and there are many techniques for writing clearer, more concise JavaScript code with it. In this session, we'll explore these techniques and talk about how to implement them. We'll use commonly used libraries like Underscore to write elegant JavaScript code.
React.js is a view library from Facebook for building performant user interfaces in JavaScript. In this session, we'll explore React.js and understand why it's a great step forward for building complex UIs that run fast. We'll code up an example web application using React.js and step through the basics of using the library while discussing concepts like the virtual DOM and components.
In this session, we'll explore React.js and understand why it's a great step forward for building complex UIs that run fast. We'll code up an example web application using React.js and step through the basics of using the library while discussing concepts like the virtual DOM and components.
Jasmine is a browser-centric testing framework that takes a BDD approach, and it is one of the most popular testing frameworks among JavaScript developers.
Mocha is a popular testing framework for JavaScript that works in any JavaScript environment, including the web browser, Node.js, and Titanium. It allows for simple asynchronous testing, test coverage, and integration with CI tools. In this session, learn all you need to know to get started writing beautiful tests in JavaScript for these environments. We'll discuss a number of add-ons for Mocha to make testing a breeze. We'll talk about “should” and “chai” for expectation matching. We'll discuss “sinon” for mocking in your test cases. We'll also talk about test automation with Grunt.
This session is a code-driven class that covers the Jasmine and Mocha JavaScript testing libraries. It is an introductory-level session.
“Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere.” Docker creates containers that provide a running process with:
It does this by leveraging low-level Linux kernel primitives like cgroups and namespaces. The end result is a portable application container that can run anywhere Docker can run, including on VMs, bare-metal servers, OpenStack clusters, public instances, or combinations of the above.
Containers are an excellent way to package your application such that it can run consistently everywhere you want to run it, a fantastic step toward Continuous Delivery. In this session we'll look at how to use Docker to package, deploy, and run Java applications and other services. We'll also compare Docker to another container solution, Warden, which is a key component of the Cloud Foundry PaaS.
Building and running container images isn’t enough. There are very real problems that we still have to solve:
These concerns are the focus of much of the development work being done in the Docker ecosystem today.
We’ll examine the use of several projects and how they contribute to solutions to these problems, including:
Visibility is one of the primary characteristics of applications that aren’t just coded well, but run well in production. We need visibility to understand:
In this talk we’ll look at the three disciplines of monitoring, metrics, and logging, and see how properly used, they can dramatically increase our system’s inherent visibility.
Topics will include:
Modern applications are changing as we embrace the engineering practices associated with Continuous Delivery and DevOps, migrate our applications to modern cloud platforms, elastically scale applications with the dynamics of customer demand, and embrace microservices architectures. The Twelve-Factor App is a collection of application development patterns developed by Heroku engineers that aim to support these types of architectural and cultural change.
The 12 Factors are:
We’ll examine how to implement these factors using JVM “microframeworks” like Spring Boot and Dropwizard.
Go is a very interesting language, open-sourced by Google in late 2009, that takes a “less is more” (http://commandcenter.blogspot.de/2012/06/less-is-exponentially-more.html) approach to language design, but that also bakes in a powerful concurrency model.
This talk will introduce Go, delve into its distinctives, and contrast its approach with that of Java (where appropriate). We'll also write a fair amount of Go code along the way. This talk will be of particular interest to Java developers looking to add Go to their toolkits, but will also be of interest to anyone looking to learn a little bit more about Go.
Topics will include:
The first part of the Continuous Delivery workshop covers the differences between continuous integration, continuous deployment, and continuous delivery. It also introduces the deployment pipeline, along with its usage, patterns, and anti-patterns. This part concludes with some applied engineering principles.
Releasing software to actual users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers and operations, delivery teams can get changes released in a matter of hours—sometimes even minutes—no matter what the size of a project or the complexity of its code base. The workshop materials are derived from the best-selling book Continuous Delivery and created in collaboration with its authors and other ThoughtWorks colleagues of mine. Continuous Delivery details how to get fast feedback on the production readiness of your application every time there is a change—to code, infrastructure, or configuration.
The first part of the workshop describes the technical differences between related topics such as continuous integration, continuous deployment, and continuous delivery. At the heart of the workshop is a pattern called the deployment pipeline, which involves the creation of a living system that models your organization's value stream for delivering software. I discuss the various stages, how triggering works, patterns and anti-patterns, and how to pragmatically determine what “production ready” means. This session also covers some agile principles espoused by the Continuous Delivery book, including new perspectives on things like developer workstations and configuration management.
Continuous Delivery relies on a variety of interlocking engineering practices to work efficiently; this session covers three related topics. First, I cover the role of testing and the testing quadrant. Second, I specifically cover version control usage and offer alternatives to feature branching such as feature toggles and branch by abstraction. Third, I describe some incremental release strategies, along with their impact on other stages of the project lifecycle.
Releasing software to actual users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers and operations, delivery teams can get changes released in a matter of hours—sometimes even minutes—no matter what the size of a project or the complexity of its code base. The workshop materials are derived from the best-selling book Continuous Delivery and created in collaboration with its authors and other ThoughtWorks colleagues of mine. Continuous Delivery details how to get fast feedback on the production readiness of your application every time there is a change—to code, infrastructure, or configuration.
Continuous Delivery relies on a variety of interlocking engineering practices to work efficiently; this session covers three related topics. First, I cover the role of testing and the testing quadrant, including the audience and engineering practices around different types of tests. I also cover some best practices around testing, including testing ratios, code coverage, and other topics. Second, I specifically cover version control usage and offer alternatives to feature branching such as feature toggles and branch by abstraction. Generally, I talk about building synergistic engineering practices that complement rather than conflict with one another. In particular, I discuss why feature branching harms three other engineering practices and describe alternatives. Third, I describe some incremental release strategies, along with their impact on other stages of the project lifecycle.
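To make the feature-toggle alternative concrete, here is a minimal, hypothetical sketch; real projects usually read flags from configuration or a toggle framework such as Togglz rather than a system property.

```java
// Trunk-based development can ship unfinished work safely because the new
// code path stays off until the flag flips, no long-lived branch required.
public class CheckoutService {

    // For the sketch, the flag comes from the system property "features.newPricing".
    private final boolean newPricingEnabled =
            Boolean.getBoolean("features.newPricing");

    public double priceFor(int quantity, double unitPrice) {
        if (newPricingEnabled) {
            return newPricingAlgorithm(quantity, unitPrice);
        }
        return quantity * unitPrice;
    }

    // Hypothetical in-progress implementation living on trunk behind the toggle.
    private double newPricingAlgorithm(int quantity, double unitPrice) {
        double subtotal = quantity * unitPrice;
        return quantity >= 10 ? subtotal * 0.95 : subtotal;
    }
}
```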
Two big stumbling blocks for Continuous Delivery adoption are interactions with operations and the keepers of data. In this session, I first cover operations, DevOps, and programmatic control of infrastructure. Second, I discuss how to incorporate databases and DBAs into the Continuous Integration and Continuous Delivery process.
Releasing software to actual users is often a painful, risky, and time-consuming process. This workshop sets out the principles and technical practices that enable rapid, incremental delivery of high quality, valuable new functionality to users. Through automation of the build, deployment, and testing process, and improved collaboration between developers, testers and operations, delivery teams can get changes released in a matter of hours—sometimes even minutes—no matter what the size of a project or the complexity of its code base. The workshop materials are derived from the best-selling book Continuous Delivery and created in collaboration with its authors and other ThoughtWorks colleagues of mine. Continuous Delivery details how to get fast feedback on the production readiness of your application every time there is a change—to code, infrastructure, or configuration.
Two big stumbling blocks for Continuous Delivery adoption are interactions with operations and the keepers of data. In this session, I first cover operations, DevOps, and programmatic control of infrastructure using tools like Puppet and Chef. I also discuss the explosion of tool alternatives in this space and cover some current-day best practices. Second, I discuss how to incorporate databases and DBAs into the Continuous Integration and Continuous Delivery process. This includes database migrations, strategies for enhancing collaboration between application development and data, and database refactoring techniques.
This multi-disciplinary session takes a deep dive into the confluence of topics required to fully understand the intersection of Continuous Delivery and architecture, including evolutionary architecture and emergent design, with an emphasis on how architectural decisions affect the ease in changing and evolving your code, the role of metrics to understand code, how Domain Driven Design's Bounded Context reifies in architecture, how to reduce intra-component/service coupling, and other techniques.
Continuous Delivery is a process for automating the production readiness of your application every time a change occurs – to code, infrastructure, or configuration. In the Continuous Delivery world, rather than hone skills at predicting the future via Big Design Up Front, Continuous Delivery emphasizes techniques for understanding and changing code with less cost during the process. Some architectures and engineering practices yield better designs for this environment. This multi-disciplinary session takes a deep dive into the confluence of topics required to fully understand the intersection of Continuous Delivery and architecture, including evolutionary architecture and emergent design, with an emphasis on how architectural decisions affect the ease in changing and evolving your code, the role of metrics to understand code, how Domain Driven Design's Bounded Context reifies in architecture, how to reduce intra-component/service coupling, and other techniques.
In this session, we will take a look at Angular - the powerful MVVM SPA framework from Google. We will discuss some of the terminology that Angular offers, and see how we can use it to develop highly interactive, dynamic web applications. See “Detail” for a list of topics covered and the GitHub repo URL.
In this session we will take a look at Angular and use it to develop rich web applications. Angular embraces HTML and CSS, allowing you to extend HTML towards your application, and uses plain JavaScript, which makes your code easy to reuse and test.
Note: This is an intro level talk. It is targeted towards developers who are curious about Angular and want to learn about the fundamental features and concepts in Angular.
Topics Covered -
- ng-app
- ng-init and the evaluation {{ }} directive
- $rootScope and scoping rules
- ng-model
- ng-repeat
- ng-form, form validation and submission in AngularJS
- ng-messages to display form validation messages to the user

GitHub URL - https://github.com/looselytyped/angudone-workshop/tree/solutions
In this session, we will take a look at Angular - the powerful MVVM SPA framework from Google. We will discuss some of the terminology that Angular offers, and see how we can use it to develop highly interactive, dynamic web applications. See “Detail” for a list of topics covered and the GitHub repo URL.
In this session we continue our discussion from Part I. As we continue to evolve our application, we will seek to use and understand a few more of AngularJS' core constructs.
- ng-view and $routeProvider
- $http
If time permits we will look at a few good practices when developing AngularJS applications, ways to modularize your code, and some tools that aid in the development of AngularJS applications.
In this session we will look at some JavaScript patterns, and how you can use them within your code.
No longer can you think of JavaScript as a kiddie scripting language. JavaScript has now been promoted to a first-class citizen within your application - and with this comes the need to think of better abstractions and code reuse, and even to establish a common vocabulary for discussing approaches to writing better, cleaner, and ultimately more maintainable JavaScript code.
It was over 10 years ago that Spring hit the scene and made a big impact in the enterprise Java development ecosystem. Now that Spring 4.2 is available (and Spring 5 is on the way), there are a lot of new features, and a lot that you may not know about yet.
Whether you're already working with Spring 4 or are anxious to make the move up, there are plenty of new tricks Spring has in store for you. We'll explore them all in this overview of everything that's new in Spring.
You wouldn't write your entire application in a single main() method or servlet. Nor would you develop an entire production-ready application in a single class. It's even unlikely that you'd cram everything into a single package.
Modularity helps us gain order in our code, breaking it into easily digestible, refactorable, pluggable, and testable chunks. Classes and methods are a form of modularity that we're all familiar with. But once the code is built, modularity goes away and we're left deploying a single WAR file.
Aside from being buzzword-compliant, microservices are a means of defining entire systems from composable but distinct deployment units, gaining all of the benefits of finer-grained modularity. As it turns out, Spring is well-equipped as the platform on which we can build and deploy microservices.
In this session, we'll examine the details of microservices and explore the features of Spring, Spring Boot, and Spring Cloud that enable you to achieve modularity via microservices.
How do you test a Spring application? The easy answer is that Spring encourages loose-coupling through interfaces and dependency injection, therefore it's easy to inject mock implementations at test time.
But, as I said, that's the easy answer.
There's more to testing an application than simple unit-testing. And the most challenging parts of an application to test are the external edges, such as the web and database portions. It's difficult to inject a mock user into a web application, and injecting a mock database doesn't give any confidence that persistence code will work with a real database.
In this session, we'll look at various ways that Spring and Spring Boot help with testing the tough-to-test portions of an application.
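One example of the kind of help Spring provides: spring-test's MockMvc exercises a controller through the real Spring MVC machinery (request mappings, message converters) without starting a servlet container. The controller and test below are hypothetical and assume JUnit 4 and spring-test on the test classpath.

```java
import org.junit.Before;
import org.junit.Test;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

public class GreetingControllerTest {

    // A tiny controller defined inline to keep the sketch self-contained.
    @RestController
    static class GreetingController {
        @RequestMapping("/greeting")
        public String greeting() {
            return "Hello";
        }
    }

    private MockMvc mockMvc;

    @Before
    public void setUp() {
        // Standalone setup: no servlet container, just the MVC infrastructure.
        mockMvc = MockMvcBuilders.standaloneSetup(new GreetingController()).build();
    }

    @Test
    public void greetingReturnsHello() throws Exception {
        mockMvc.perform(get("/greeting"))
               .andExpect(status().isOk())
               .andExpect(content().string("Hello"));
    }
}
```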
The Semantic Web and its related technologies provide an incredibly powerful model for driving the cost of data integration down to nearly zero. So, how do we deal with developers who are overwhelmed, frightened or annoyed by its data models and formats?
We really do not have to devolve into Webs of Haves and Have-nots when it comes to semantically rich, interoperable data and modern application tools, frameworks, and user interfaces. There is a surprisingly simple mechanism by which “normal” developers can benefit from the power of the Semantic Web, and the latter's developers can integrate with the panoply of tools and toys under constant development by the former.
The trick is JSON-LD: a simple but deliberately designed extension to JSON that bridges both worlds and is finding its way into many other uses by the likes of Google and GitHub.
We will learn about:
Encryption is a powerful tool for privacy. At least that is what we're meant to think.
If you consider encryption to be a black box of magic, you should probably attend this talk.
If you think encryption will protect your secrets, you should probably attend this talk.
If you have (or haven't) been following the news, you should probably attend this talk.
The truth is, encryption can be an effective way of making it harder for people to steal your secrets. But it isn't magical, it isn't foolproof and, depending on how you are using it, may be completely useless. It is a hard topic that we'll only scratch the surface of, but there are very few topics that are more crucial for our industry and profession to understand better than encryption. You don't have to understand the math (although that will help), but you do have to understand what it will and won't do for you*.
*and how implementations of it may have been intentionally compromised
If you're not terrified, you're not paying attention.
Publishing information as webs of data does not require us to just give it away. We have a series of tools and techniques for managing identity, authentication, authorization and encryption so we only share content with those we trust.
Before we tackle Web Security, however, we need to figure out what we mean by Security. We will pull from the worlds of Security Engineering and Software Security to lay the foundation for technical approaches to protecting our web resources.
The cost of integrating information isn't cheap. Well, at least it isn't if you do it wrong. Chances are, you're doing it wrong.
The single most difficult aspect of data integration is the effort to achieve consensus. It isn't just that we are disagreeable people. It's also that it is a fantasy that there is a “common model” or a “global truth”. Different groups and individuals see the world differently and have different needs from information systems. Language, and therefore what we call things, isn't simply reflective of reality. It plays a constructive and interpretive role.
The problem is that our technologies force us to make choices about world views and pretend that things aren't changing constantly. This yields fragile systems and a high impedance to change that cascades through our organizations. It translates into expensive, rigid, difficult-to-extend systems that fail to give the business what it wants.
Our friend Tim Berners-Lee and his Happy W3C Merrymakers have given us a set of technologies to help us solve these problems though. We forget that the Web he designed was not the public Web, but one to solve integration needs for complex organizations like CERN. The HTML bit that we have gotten so excited about is but a small part of the vision. We will introduce RDF and SPARQL as enabling technologies. They do not necessarily replace what you already have, but they do make it possible to share information with people you've never talked to: Collaboration without Coordination.
“Computers are useless. They can only give you answers.” –Picasso
If everything is explicitly described in uniform, robust and comprehensive data models, it is easy to imagine how to ask questions of the data. The social, technical and financial costs of getting the information into that state, however, will keep that from ever being a reality. So, how can we integrate and reason over choppy and sloppy data from multiple sources in a variety of formats? We will see how the RDFS and Web Ontology Language (OWL) W3C standards help us connect and reason over content that leaves things unsaid without writing a bunch of custom code.
This is not artificial intelligence, but it is influenced by work done in that field.
A Technology Radar is a tool that forces you to organize and think about near term future technology decisions, both for you and your company. This talk discusses using the radar for personal breadth development, architectural guidance, and governance.
The ThoughtWorks Technical Advisory Board creates a “Technology Radar” twice a year, a working document that helps the company make decisions about interesting technologies and where we spend our time. ThoughtWorks then started conducting radar-building exercises for our clients, which provide a great medium for technologists company-wide to express their opinions about the technologies they use every day. For companies, creating a radar helps you document your technology decisions in a standard format, evaluate technology decisions in an actionable way, and create cross-silo discussions about suitable technology choices. This session describes the radar visualization and how to conduct a radar-building session for yourself. After a brief introduction, the bulk of the workshop consists of attendees building a radar for the group, following the same procedure you'll use when you do this exercise at your company. At the end, we'll have created a unique radar for this event and you'll have practiced doing it for yourself.
Building a PaaS
The drive to continuous delivery, microservices, and PaaS includes the need to auto-scale and potentially auto-provision services. This session will identify the legacy thinking of a statically partitioned world and drive toward the scalable world of Apache Mesos.
This session will look at the failings of many of today's cloud technologies, identify the goals we have, and look into the tooling that gets us there. In this session we will look at:
This session will be packed with demonstrations.
Next Generation Web Applications -> Aggregation of Resources
The next version of JavaScript, defined by ECMAScript 6 (ES6), is packed with great new features. There are many more significant additions than were provided in ES5. Firefox and Chrome are aggressively adding support for these, and IE11 supports some, but it may be years before all popular browsers fully support ES6. The good news is that you don't have to wait to use ES6 features.
You can use a transpiler today to convert ES6 code to ES5, enabling it to be run in current browsers and with Node.js. This process can be automated to occur whenever JavaScript file modifications are saved from any editor/IDE.
This talk will describe tooling that enables this and provide an overview of many of the new features. These include block scope, default parameter values, destructuring, arrow functions, classes, template strings, promises, modules, iterators, generators, and more. After learning about these features, you won't want to wait to start using them!
Functional programming is receiving much attention recently. Most mainstream languages, including C++, Java, and C#, now support lambda expressions and a functional style. Why is there a sudden interest in adopting something that was largely ignored for decades? What are the technical reasons, how do we benefit, and in what ways can using these techniques make our applications better and our development efforts easier?
In this technical presentation we will learn about the technical underpinnings of functional programming, the fundamental problems they address, and their large impact on application development.
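A brief sketch of one such underpinning, treating functions as composable values, here using Java 8's java.util.function types:

```java
import java.util.function.Function;
import java.util.function.Predicate;

public class Composition {
    public static void main(String[] args) {
        // Functions are values: they can be stored, passed around, and composed.
        Function<String, String> trim = String::trim;
        Function<String, Integer> length = String::length;
        Function<String, Integer> trimmedLength = trim.andThen(length);

        System.out.println(trimmedLength.apply("  hello  "));   // 5

        // Predicates compose too, letting small, testable pieces build up behavior.
        Predicate<String> notEmpty = s -> !s.isEmpty();
        Predicate<String> shortWord = s -> s.length() <= 5;
        System.out.println(notEmpty.and(shortWord).test("java"));  // true
    }
}
```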
In this session, we'll see how to build real Spring applications using Spring Boot. We'll also look under the covers to see what makes Spring Boot tick.
Spring offers a number of configuration options: XML configuration, Java configuration, and Groovy configuration to name a few. To some degree, component-scanning and autowiring help eliminate some explicit configuration. But in general most Spring applications require some essential “bootstrap” configuration to enable key functionality. What's the right way to build Spring applications when there are so many choices?
What if I told you that configuration was optional?
Spring Boot is an exciting new programming model for Spring that makes it extremely easy to create stand-alone, production-ready Spring applications. Rather than writing lots of code to satisfy the needs of a framework, Spring Boot helps you focus your coding efforts on your application. Spring Boot takes an opinionated approach to configuring Spring, making it possible to create Spring applications with little or, in some cases, no Spring configuration at all!
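A minimal sketch of what that looks like, assuming only the spring-boot-starter-web dependency; the single class below is a complete, runnable web application:

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Auto-configuration supplies the embedded servlet container, the
// DispatcherServlet, and sensible defaults: no XML, no explicit @Bean wiring.
@SpringBootApplication
@RestController
public class HelloApplication {

    @RequestMapping("/")
    public String hello() {
        return "Hello, Spring Boot!";
    }

    public static void main(String[] args) {
        SpringApplication.run(HelloApplication.class, args);
    }
}
```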
As a professional software engineer, it's easy to fall into the belief that one's role in a company is to write code.
Another perspective might be that one's role is to solve problems for the business and that writing code is merely one of several tools available to help solve those problems.
There are numerous problem-solving “anti-patterns” that are rampant in the industry today. “Forewarned is forearmed” as they say. In addition to highlighting these “anti-patterns” with real-life examples and the (sometimes) disastrous consequences, Michael asks some of the difficult questions about our true motivations for our decisions and how our decisions can either positively or negatively affect our team and our organization.
Conciseness, expressiveness, and fluency, among other things, are some of the core strengths of Scala. Beyond the language capabilities, the Scala library has a wealth of facilities to make expressive and concise code quite efficient. This combination of benefits comes from the functional side of Scala, where function composition and lazy evaluation are commonplace.
In this presentation we will learn about being lazy and, by doing so, how we can make the code quite efficient both from the point of view of expression and execution.
Jamie Zawinski once said, “Some people, when confronted with a problem, think ‘I know, I'll use regular expressions.’ Now they have two problems.” Many consider regular expressions to be indecipherable, but the truth is that every programmer should consider regular expressions an integral part of their toolkit. From the command line to your favorite text editor, from parsing user input to scraping HTML pages - once you know regular expressions you will find a use for them in almost every programming context.
In this session we will attempt to unriddle the mystery that regular expressions pose. We will start at the basics and work our way towards more complex expressions.
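A small Java sketch of the basics we'll build on: capture groups to extract structure, matches for validation, and replaceAll for substitution (the log line is invented for the example).

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexBasics {
    public static void main(String[] args) {
        String log = "2015-03-14 ERROR disk full; 2015-03-15 INFO cleanup done";

        // Capture groups pull structure out of free-form text:
        // group 1 is the date, group 2 is the log level.
        Pattern pattern = Pattern.compile("(\\d{4}-\\d{2}-\\d{2}) (ERROR|WARN|INFO)");
        Matcher matcher = pattern.matcher(log);
        while (matcher.find()) {
            System.out.println(matcher.group(1) + " -> " + matcher.group(2));
        }

        // The same toolkit handles validation...
        System.out.println("a-b-c".matches("[a-z](-[a-z])*"));   // true

        // ...and substitution.
        System.out.println(log.replaceAll("\\d{4}-\\d{2}-\\d{2}", "<date>"));
    }
}
```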
We started with documents on the Web. Then we realized we could share data using the same technologies. And then services. Concepts were next to help us organize all of this information. And now, we want Things on the Internet and Web too. Fortunately, we already have much of the infrastructure to welcome all of these new physical information producers and consumers to the party.
This talk will be a high level overview of how the Internet of Everything scales to accept entire new classes of devices and how we make sense of it all. We will also look toward the future of where we are going next.
Becoming more agile
Big up-front design is discouraged in agile development. However, we know that architecture plays a significant part in software systems. Evolving the architecture during the development of an application seems to be a risky business.
In this presentation we will discuss the reasons to evolve the architecture, some of the core principles that can help us develop in such a manner, and the ways to minimize the risk and succeed in creating a practical and useful architecture.
Creating code is easy; creating good code takes a lot of time, effort, discipline, and commitment. The code we create is truly the manifestation of our designs. Creating a lightweight design can help make the code more extensible and reusable.
In this presentation we will take an example-oriented approach to looking at some core design principles that can help us create better designs and more maintainable code.
Before spending substantial effort in refactoring or altering a design, it would be prudent to evaluate the current quality of the design. This can help us decide whether we should proceed with the refactoring effort or a particular alteration of the design. Furthermore, after evolving a design, using some design metrics can help us evaluate whether we have improved on the design front.
In this workshop we will learn about some critical qualities of design and how to measure those. We will learn about these by working through some example code, refactoring it, and evaluating the design again at each stage of refactoring.
- Java 8 JDK or a later version
- Your favorite IDE (preferably IntelliJ IDEA Community Edition)
- git
Before spending substantial effort in refactoring or altering a design, it would be prudent to evaluate the current quality of the design. This can help us decide whether we should proceed with the refactoring effort or a particular alteration of the design. Furthermore, after evolving a design, using some design metrics can help us evaluate whether we have improved on the design front.
In this workshop we will learn about some critical qualities of design and how to measure those. We will learn about these by working through some example code, refactoring it, and evaluating the design again at each stage of refactoring.
- Java 8 JDK or a later version
- Your favorite IDE (preferably IntelliJ IDEA Community Edition)
- git