Apache Kafka is the core of an amazing ecosystem of tools and frameworks that enable us to get more value from our data. Let's take a journey through this wonderful world and see what we can learn.
In this session, we'll have a gentle introduction to Apache Kafka, and then a survey of some of the more popular components in the Kafka ecosystem. We will be looking at the Kafka Producer and Consumer libraries, Kafka Connect, Kafka Streams, the Confluent Schema Registry, and ksqlDB. You will leave loaded with ideas of ways to put Kafka to use in your organization, and a list of resources to help you on your journey.
Some say that data is the new oil, and like oil, data needs to be collected, refined, and distributed to gain value from it. For years we've collected data into tankers called databases, and when we had enough, we shipped batches of it to be refined by some process. Then when that was done, we'd make it available to end-users. However, unlike oil, data tends to lose its value the longer we wait to use it. (Data is the new milk?)
So, what if we could hook up a pipeline to the source of the oil, er… data, and transform it in near real time, on its way to the end-users? That's precisely what we can do with Apache Kafka and Kafka Streams. In this session, we'll see how we can use Kafka Streams to enrich, aggregate, summarize, and otherwise transform our data while it is flowing through our systems.
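To give a flavor of what that looks like in code, here is a minimal Kafka Streams sketch in Java. The topic names, the application id, and the JSON "PAID" filter are placeholders of my own, not material from this session.

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class OrderPipeline {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> orders = builder.stream("orders");           // hypothetical input topic
            orders
                .filter((key, value) -> value.contains("\"status\":\"PAID\""))   // keep only paid orders
                .mapValues(String::toUpperCase)                                  // stand-in for a real enrichment step
                .to("paid-orders");                                              // hypothetical output topic

            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-pipeline");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            new KafkaStreams(builder.build(), props).start();
        }
    }

The same builder can also aggregate and summarize, for example by grouping by key and counting, which is the kind of transformation the session explores in depth.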
We live in a world of microservices. Yet, what is a microservice? What defines the boundaries of a microservice? How do we define the relationships between microservices? Thankfully, domain-driven design gives us the concepts and practices to better design and decompose our services.
In this session we will consider many of the concepts of DDD — how bounded contexts use a ubiquitous language to model the domain, how context maps can be used to establish the interconnections between services, as well as aggregates and domain events, all of which will serve us well as we go about creating our microservices.
In this session we will go further with DDD, considering aspects like aggregates, domain events, factories and repositories, and consider how DDD can be represented in architecture, be that hexagonal, REST or RPC.
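To make that vocabulary a little more concrete, here is a small, purely illustrative Java sketch of an aggregate that guards an invariant, is created through a factory method, and records a domain event. The Order and OrderPlaced names are my own examples, not taken from the session.

    import java.util.ArrayList;
    import java.util.List;

    // Domain event: a fact about something that happened inside the aggregate.
    record OrderPlaced(String orderId) {}

    // Aggregate root: all changes go through this object, so it can enforce
    // its own invariants and record events as it changes.
    class Order {
        private final String id;
        private final List<Object> pendingEvents = new ArrayList<>();
        private boolean placed;

        private Order(String id) { this.id = id; }

        // Factory: keeps construction rules in one place.
        static Order create(String id) { return new Order(id); }

        void place() {
            if (placed) throw new IllegalStateException("order already placed");
            placed = true;
            pendingEvents.add(new OrderPlaced(id));   // published later, e.g. by a repository or outbox
        }

        List<Object> pendingEvents() { return List.copyOf(pendingEvents); }
    }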
If you need a deeper dive into DDD, then this is the session for you.
Continuous refactoring is critical to succeeding in projects and is an important part of sustainable agile development.
In this workshop, we will start by discussing how to approach refactoring, the essential steps we need to take, and look into how to incrementally improve the internal design of code to make it extensible, maintainable, and cost-effective to change. In addition to discussing the concepts, we will take several code examples from real projects, discuss the code smells and explore the refactoring techniques. Along the way, we will also dive into refactoring short code samples and measure the quality of code before and after refactoring.
Computer with git client to access git repository.
Java 8 or newer
Your favorite IDE
Event-driven architectures look great on paper (or a whiteboard), but it's easy to run into snags when trying to implement them. In this talk, we'll see how tools from the Apache Kafka ecosystem can make this task easier and a lot more fun!
In this session, with Apache Kafka as the backbone, we'll use Micronaut to implement an event-driven architecture that is loosely coupled and easily extensible. Then we'll extend it with a Kafka Streams application without affecting any of the existing applications.
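For a sense of how little code that loose coupling takes, here is a rough sketch using Micronaut's Kafka support; the topic name, consumer group, and String payload are placeholders I've chosen for illustration, not the session's actual code.

    import io.micronaut.configuration.kafka.annotation.KafkaClient;
    import io.micronaut.configuration.kafka.annotation.KafkaListener;
    import io.micronaut.configuration.kafka.annotation.Topic;

    // Producer side: Micronaut generates the implementation of this interface.
    @KafkaClient
    public interface OrderEventPublisher {
        @Topic("order-events")                  // hypothetical topic name
        void publish(String orderJson);
    }

    // Consumer side: another service reacts to the event without knowing about the producer.
    @KafkaListener(groupId = "shipping")        // hypothetical consumer group
    class ShippingListener {
        @Topic("order-events")
        void onOrderEvent(String orderJson) {
            // kick off shipping logic here
        }
    }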
You have been using Git for a while. You know how to stage and commit your work, create and delete branches, and collaborate with your team members using remotes. But Git often leaves you confused — ever committed your work to the wrong branch? Even worse, ever accidentally deleted a branch that you needed to keep around? And what in God's good name is “Detached HEAD state”? Why tag commits, when we have branches? Is there a better workflow than just using merges? What's the difference between a merge and a rebase?
The answer to all of these questions, and more, lies in the constitution of a commit, and the directed acyclic graph (DAG) that Git uses to manage your history. This, right here, is the key to understanding everything in Git.
In this hands-on workshop, we will level up your Git skills. We will foray into the underbelly of Git, and reveal the mystery behind the arcane interface that is the Git CLI.
By the end of this workshop, you will have a keen understanding of how best to use Git, as well as know how to dig yourself out of any prickly situation you might find yourself in. You will become your team's hero(ine). Most importantly, you will walk away with a keen appreciation of how beautiful and elegant Git really is.
The web is arguably the single most impactful revolution in human history (to date). By agreeing on a simple set of standards, we have collectively unlocked all the world's information. Documents can be discovered, retrieved, published, and shared so easily we don't even think about it.
Data, on the other hand, is a different story. Our data remains stuck in the 1980s. Locked in silos, each with a different format, interface, and conventions that must be interpreted by a human, parsed, mapped, and converted. Data is at the heart of many problems we solve today, and we produce data exponentially faster than we can consume it.
Today I can request any document from any server on the web. I need to know nothing about the underlying technology the server uses, nothing about how the information is stored or retrieved, and I can consume it instantly. We've been evolving those same capabilities with data over the past 20 years, and the standards, tools, and technologies are reaching critical mass. The linked data revolution is one that you can no longer ignore. Join us to see what you've been missing.
Completely Rewritten for 2023
Although the Resource-Oriented Architecture is one of the oldest and most successful distributed architectures, it remains poorly understood and often completely overlooked today.
Much of the microservices architecture pattern is focused on taking applications apart, although seemingly everyone has different ideas on how to put things back together again.
In this session, we will start with a summary of the resource-oriented architecture along with its strengths and weaknesses. Michael will share hard-earned, real-world experience applying ROA concepts to complex microservice environments to successfully build an infinitely scalable, extensible, and understandable system.
Mob Programming is a style of programming in which the entire team sits together and works on a single task at a time. Teams that have worked this way have found that many of the problems that plague normal development just melted away, possibly because communication and learning increase. Teams also find that the quality of their code increases. They find their capacity to create increases. However, the best part of all this is that teams end up being happier and more cohesive.
In this session we introduce the core concepts of mob programming and then get hands-on, mobbing on a coding kata.
Integration, once a luxury, is now a necessity. Doing this well, however, continues to be elusive. Early attempts to build better distributed systems such as DCOM, CORBA, and SOAP were widely regarded as failures. Today the focus is on REST, RPC, and GraphQL-style APIs.
Which is best? The go-to answer for architects is, of course, “it depends.”
In this session, we look at the various API approaches and how they attempt to deal with the challenges of decoupling client from server, evolvability, extensibility, adaptability, and composability.
The biggest challenge is that needs change over time, and APIs must necessarily evolve. Versioning is challenging, and breaking changes are inevitable. You'll leave this session with a high-level understanding of these approaches, their respective tradeoffs, and ultimately how to align your API approach with your architectural and organizational goals.
In Part 1, you learned the core principles of influence and persuasion. How do we take this back to the office and apply what we've learned?
We dive deep into specific strategies to get both the team and the business on board with your ideas and solutions. We cover several real-world patterns you can follow to be more effective and more persuasive. Part 1 was conceptual; part 2 is practical.
By the end of this conference you will have learned many new tools and technologies. The easy part is done; now for the hard part: getting the rest of the team, and management, on board with the new ideas. Easier said than done.
Whether you want to effect culture change in your organization, lead the transition toward a new technology, or are simply asking for better tools, you must first understand that having a “good idea” is just the beginning. How can you dramatically increase your odds of success?
You will learn 12 concrete strategies to build consensus within your team as well as 6 techniques to dramatically increase the odds that the other person will say “Yes” to your requests.
As a professional mentalist, Michael has been a student of psychology, human behavior, and the principles of influence for nearly two decades. There are universal principles of influence that are necessary to both understand and leverage if you want to be a more effective leader of change in your organization.
In this session we discuss strategies for getting your team on board, as well as when and how to approach management within the department and higher up in the organization.
On the NFJS tour, there are questions that seem to come up again and again. One common example is “How do we determine which new tools and technologies we should focus our energy on learning?” Another is “How do we stop management from forcing us to cut corners on every release so we can create better and more maintainable code?” which, after a while, becomes “How can we best convince management we need to rewrite the business application?”
There is a single meta-answer to all these questions and many others.
It begins with the understanding that what we as engineers value and what the business values are often very different (even if the ultimate goals are the same). By being able to understand these different perspectives, it's possible to begin to frame our arguments around the needs and the wants of the business. This alone will make any engineer significantly more effective.
This session picks up where “Stop writing code and start solving problems” stops, discussing what value is and how we align the values of the business with the needs and values of the engineer.
Over the last decade, DevOps has emerged as an influential business philosophy and practice, helping businesses drive high-quality software to market faster. DevOps focuses on the elimination of bottlenecks that occur when development and operational resources are too divorced from one another. But what about friction in the development and test process? What about the delayed feedback cycles that come from slow builds and test flakiness? How can we reduce friction in areas that are outside of the focus of DevOps? Raju Gandhi, Developer Advocate at Gradle Enterprise, will talk about how the emerging practice of Developer Productivity Engineering (DPE) addresses these questions. The presentation will include examples of DPE practices in action from Java projects using the Maven or Gradle build tool.
Attendees will walk away from this presentation with a better understanding of:
In recent years the development of the Java language has accelerated significantly. Some of the more recent features are quite exciting and can lead to better modeling, fewer errors, less verbosity, and better developer productivity. In this presentation we will take a close look at these features and their significance. We will use concrete examples as use cases to explore the modern features. In addition to looking at these features we will also take a look under the hood at how they are implemented. This will help us to gain a better understanding of the built-in efficiencies these features come with.
Get ready to take part in some live coding as we dive into these amazing features.
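As one hedged example of the kind of code these features enable (the shapes example is mine, not necessarily the one used in the session), here is a sketch combining records, a sealed interface, and pattern matching for switch, using Java 21 syntax:

    // Records are compact, immutable data carriers; a sealed interface restricts
    // who may implement it, so the switch below is checked for exhaustiveness.
    sealed interface Shape permits Circle, Square {}
    record Circle(double radius) implements Shape {}
    record Square(double side) implements Shape {}

    class Areas {
        static double area(Shape shape) {
            // Pattern matching for switch: no instanceof checks or casts needed.
            return switch (shape) {
                case Circle c -> Math.PI * c.radius() * c.radius();
                case Square s -> s.side() * s.side();
            };
        }

        public static void main(String[] args) {
            System.out.println(area(new Circle(2)));   // 12.566...
        }
    }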
Functional and reactive programming is gaining popularity and use. One hurdle developers face using these approaches is exception handling. Dealing with exceptions in these styles is confusing in the beginning and is often messy and error-prone. In this presentation we will step back and take a closer look at exception handling in functional and reactive programming, and discuss the dos and don'ts. We will learn using concrete examples and live coding to illustrate the problem and the possible solutions.
We will explore options available in different languages and libraries and learn from them.
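One common idea, shown below as a small sketch of my own (not necessarily the approach taken in the session), is to turn exceptions into ordinary values so a stream pipeline can keep flowing instead of blowing up mid-stream:

    import java.util.List;
    import java.util.function.Function;

    // A tiny Result type: a success carries a value, a failure carries the exception.
    sealed interface Result<T> permits Ok, Err {}
    record Ok<T>(T value) implements Result<T> {}
    record Err<T>(Exception error) implements Result<T> {}

    class SafeCalls {
        interface ThrowingFn<T, R> { R apply(T input) throws Exception; }

        // Lift a throwing function into one that returns a Result instead of throwing.
        static <T, R> Function<T, Result<R>> lift(ThrowingFn<T, R> fn) {
            return input -> {
                try { return new Ok<>(fn.apply(input)); }
                catch (Exception e) { return new Err<>(e); }
            };
        }

        public static void main(String[] args) {
            Function<String, Result<Integer>> parse = lift(Integer::parseInt);
            List<Result<Integer>> results = List.of("10", "oops", "42").stream()
                .map(parse)        // the bad value becomes an Err, not a thrown exception
                .toList();
            results.forEach(System.out::println);
        }
    }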
Have you looked at the methods of CompletableFuture and wondered how to actually use them? Do the function names appear strange, or at least different from what we're used to? Let's take another look so those names become easier to remember and also to make use of.
In this presentation we will take a practical look at CompletableFuture and how to make good use of it, and how to deal with exceptions.
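For orientation, here is a tiny sketch of the shape these pipelines take; the lookups are fake placeholders, and the point is only to contrast thenApply with thenCompose and show one way of recovering from failure.

    import java.util.concurrent.CompletableFuture;

    class GreetingDemo {
        public static void main(String[] args) {
            CompletableFuture<String> greeting = CompletableFuture
                .supplyAsync(() -> "user-42")                        // pretend this hits a remote service
                .thenApply(String::toUpperCase)                      // synchronous transformation of the result
                .thenCompose(id -> CompletableFuture.supplyAsync(    // chain a second, dependent async call
                        () -> "Hello, " + id))
                .exceptionally(ex -> "Hello, anonymous");            // recover with a fallback instead of failing

            System.out.println(greeting.join());                     // Hello, USER-42
        }
    }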
Albert Einstein once said — “We cannot solve our problems with the same thinking we used when we created them.”
As developers, architects, managers, and tech leads, our job entails solving big problems. However, our systems are composed of parts, and it is the interaction of those parts that ensures the system works, and tells us where to go looking when things fail. What does it mean to think in systems? What are the benefits, and what tools are available to us? If any of this intrigues you (and it should) come join me as we explore a new way of thinking about problems, both big and small.
In this session we will take a page from Donella H. Meadows' fantastic book “Thinking in Systems” and attempt to get an understanding of what makes up a system, how we can tease apart the moving parts of the system, and get into the mindset of a “systems thinker”.
Embraced containers yet? If so, that only presents the beginning of the journey. Designing your images to be lean, and your containers configurable requires us to leverage Dockerfiles to their maximum potential. At scale, everything matters—build times, testing, multi-stage builds, conventions around tagging and logging. There is a whole ecosystem of tools around how we can best build our images and containers.
In this session we will learn many a trick on how we can leverage Docker's own tooling as well as third-party tools to ensure that our first steps in the container world are the right ones.
Architecture is not a static representation of a system. There are several complexities and risks involved in creating one. One way to mitigate the risk is to evolve the architecture. But there are risks in evolving, just as there are risks in not evolving. In this interactive workshop we will explore a set of practices that we can use to mitigate those risks. Then we will dive into discussing some common and popular architectural patterns.
Finally, we will take some example applications and discuss how to evolve architecture to meet the needs of those applications.
Computer with git client installed to access the version control system, which will have lab-related material.
There are some amazing features that are being introduced in Java in the coming years. Some of these features are just around the corner. However, these features, in various forms, have been in other languages for a while.
In this presentation we will take a look at some features in Kotlin, Scala, and JavaScript that are soon to be features of Java, understand their power and benefits and how we can make use of them. We will relate these features to how they are manifesting in Java.
The power of Java's functional programming goes far beyond lambdas and the filter and map operations of the Stream API. In this presentation we will look at various data transformations and see how the functional APIs in the JDK can provide concise solutions for them.
We will start with the Stream API and quickly dive into some newer functions and also the utilities of the Collectors class. Along the way you will get a deeper understanding of some of the excellent functions that are hidden in plain sight.
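As one hedged example of what we mean (the Talk record and the numbers are made up for illustration), grouping plus the Java 12+ teeing collector can summarize a collection concisely, in a single pass:

    import java.util.List;
    import java.util.Map;
    import static java.util.stream.Collectors.*;

    class CollectorsDemo {
        record Talk(String track, int minutes) {}      // illustrative data shape

        public static void main(String[] args) {
            List<Talk> talks = List.of(
                new Talk("java", 90), new Talk("java", 60), new Talk("cloud", 45));

            // Group, then summarize within each group.
            Map<String, Long> talksPerTrack =
                talks.stream().collect(groupingBy(Talk::track, counting()));

            // Java 12+: compute two summaries in one pass and combine them.
            double averageMinutes = talks.stream().collect(
                teeing(summingInt(Talk::minutes), counting(),
                       (total, count) -> (double) total / count));

            System.out.println(talksPerTrack + ", average = " + averageMinutes);
        }
    }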
Many developers around the world have jumped on the microservices bandwagon. Several organizations, in the past and also in the present, have built around monolithic and also service-oriented architectures. What separates microservices from those efforts at large? How does a microservices-based architecture influence the design of software applications, what are the key design principles we should keep in mind, and how do we leverage design goals like reuse and extensibility?
In this presentation we will revisit the design goals we often focus on in software development and how those evolve in the context of Microservices.
Game of Life is an intriguing game. At first glance it looks simple, but as you look closer, it appears to be quite complex. How can we implement this game with different constraints, and what are those constraints? Is it possible to use functional programming for this, to honor immutability? You see, it is intriguing.
We will discuss the constraints, think about how we may be able to solve them, and along the way discover how functional programming can play a role. We will have a fully working program, using live coding, at the end of this session, to illustrate some nice ideas that will emerge from our discussions.
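As a hint at where the discussion can lead, here is one possible purely functional core in Java, using an immutable set of live cells; this is just one of many ways to shape it, not the session's official solution.

    import java.util.List;
    import java.util.Map;
    import java.util.Set;
    import java.util.stream.Collectors;
    import java.util.stream.IntStream;

    class Life {
        record Cell(int x, int y) {}

        // The whole rule set in one expression: a cell is alive next generation if it has
        // exactly 3 live neighbors, or 2 live neighbors and it is alive now.
        static Set<Cell> nextGeneration(Set<Cell> live) {
            Map<Cell, Long> neighborCounts = live.stream()
                .flatMap(cell -> neighbors(cell).stream())
                .collect(Collectors.groupingBy(c -> c, Collectors.counting()));
            return neighborCounts.entrySet().stream()
                .filter(e -> e.getValue() == 3 || (e.getValue() == 2 && live.contains(e.getKey())))
                .map(Map.Entry::getKey)
                .collect(Collectors.toUnmodifiableSet());
        }

        static List<Cell> neighbors(Cell c) {
            return IntStream.rangeClosed(-1, 1).boxed()
                .flatMap(dx -> IntStream.rangeClosed(-1, 1).boxed()
                    .filter(dy -> dx != 0 || dy != 0)
                    .map(dy -> new Cell(c.x() + dx, c.y() + dy)))
                .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            Set<Cell> blinker = Set.of(new Cell(0, 1), new Cell(1, 1), new Cell(2, 1));
            System.out.println(nextGeneration(blinker));   // the horizontal bar flips to vertical
        }
    }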
Secure, Efficient, Resilient, High-performing, Sustainable, and Cost-effective
Are your applications well-architected? This talk will explore the best practices for operational excellence, security, reliability, performance efficiency, and cost optimization. Think of the systems and services that provide business value. Do you know if all of these services are well-architected? You will learn how to create mechanisms: a repeatable process that allows you to improve over time. We will explore the best practices using real-world examples to make them more concrete and actionable.
Well-Architected helps cloud architects build secure, high-performing, resilient, and efficient infrastructure for various applications and workloads. It is built around six pillars—operational excellence, security, reliability, performance efficiency, cost optimization, and sustainability.
Join expert Rohit Bhardwaj to gain the knowledge and skills you need to solve current cloud implementation problems.
What you'll learn — and how you can apply it
By the end of this live, hands-on, online course, you'll understand the following:
– How to create responsive, maintainable, extensible architecture
– How to manage identities for people and machines and understand the significance of role-based, service-based, and attribute-based access
– How to design network topology and protect your network resources
– How to design interactions in a distributed system to prevent failures and improve performance and resiliency
– How to select the best-performing architecture and choose performant storage and databases
– How to manage demand and supply resources
– How to take advantage of user behavior patterns to support your sustainability goals
Topics covered:
Design Principles
– Scaling patterns
– Architecture Design Principles
– Capacity calculations
– Impact of data on design decisions
– Shared Responsibility Model
Reliability
– Resilient Architecture principles
– Herds of complex real-time distributed systems
– Hands-on Exercises / Case Studies
– Blast radius: fault isolation to protect your workload
– Availability patterns
– Recovery Point Objective and Recovery Time Objectives
– Data backup patterns
– Routing Strategies
– Service quotas and constraints
– Design your workload service architecture
– Failure management in a distributed system
– Monitoring workload resources
– Calculating the response times
– Fallacies of Distributed Systems
– Testing reliability
Cost Optimization
– Design cost-optimized storage
– Cost-optimized compute
– Data transfer costs
– Manage demand and supply resources
– Hands-on Exercises / Case Studies
Sustainability
– User behavior patterns
– Data access and usage patterns
– Development and deployment processes
– Hands-on Exercises / Case Studies
Performance Efficiency
– Select the best-performing architecture
– Choosing performant storage and databases
– No-SQL for performance
– Caching strategies
– DoS attacks
– Tradeoffs to improve performance
– Evolving your workload
– Handle skewed data
– CDNs like CloudFront to solve the caching requirements for static and dynamic content
– Monitor and set alarms for performance and network issues
– Hands-on Exercises / Case Studies
Operational Excellence
– Principles for performing operations as code (infrastructure as code)
– Annotated documentation - playbooks as part of code
– Create runbooks (e.g., server down)
– Capture failures and analyze them using Events and Real-Time Actions
– KPIs for cloud dashboard
– Incident response - Root Cause Analysis
– Hands-on Exercises / Case Studies
Security, Privacy, and Compliance
– Manage identities for people and machines
– Identity and Access Management
Role-Based, Service-Based, and Attribute-Based Access
– Securely operate your workload.
– Detect and investigate security events
– Web Application Firewall
– Virtual Private Cloud - design network topology
– Protecting your network resources
– Bastion Hosts
– Data classification
– Protecting data in Transit
– Protecting data at Rest
Hands-on Exercises / Case Studies
Knative is a container-oriented framework for “enterprise-grade serverless on your own terms.” Running in Kubernetes, on any cloud provider or your own data center, Knative offers two serverless capabilities: Serving and Eventing.
Serving is all about auto-scaling, not only up as load increases, but down as load drops off, including scaling all the way to zero.
This presentation is a gentle introduction to Knative.
In this talk, we will explore the following topics:
You will leave this talk with a solid foundation of Knative basics and a headful of ideas on how your team can roll your own serverless solution using Knative.
Serverless is becoming a de facto standard architectural approach, but too often we get bogged down in serverless compute as middleware. Thankfully, front-end developers now have easy access to serverless computing without being forced to become AWS or Azure cloud architects.
This talk will be a gentle introduction to serverless at the edge, not just in the front-end, but at the database as well.
After a bit of overview on what we mean by serverless at the edge, we'll dive into and demo four different approaches, covering both serverless for the frontend and databases:
Through these various scenarios, you'll gain a pretty good idea of the kinds of serverless options you have for the edge.
New to serverless? No worries! This will give you plenty of exposure to the key concepts involved.
Modern software development often leverages services of public cloud providers like AWS, Azure, and Google Cloud. Each vendor offers a robust set of web UIs and CLIs for infrastructure provisioning, but these are typically for one-off provisioning and don't allow for normal Software Development Lifecycle Processes like source control and code reviews.
This talk is a gentle introduction to Infrastructure as Real Code using an easy, but very capable, provider-agnostic tool called Pulumi.
Broadly speaking, Infrastructure as Code (IaC) solutions come in two categories. The first category uses a Declarative approach, typically using YAML or JSON. For AWS users, think CloudFormation. Also, approaches like Terraform, a very popular IaC toolchain from HashiCorp, offer a quasi-declarative approach with some richer dynamic capabilities. However, to a software developer, these all seem a bit foreign, using arcane, proprietary, and unfamiliar syntax and idioms.
Fortunately, the second category of IaC solutions uses an Imperative approach; i.e. real code like what developers are used to. To a development team, this can feel much more natural, fits into the normal SDLC processes, and offers many capabilities that will never be available from declarative IaC approaches.
This talk is a gentle introduction to Infrastructure as Real Code using an easy, but very capable, provider-agnostic toolchain called Pulumi.
In this talk, we will explore the following topics:
You will leave this session armed with the basics of Infrastructure as Code with Pulumi, a few recipes to actually provision resources, and a much better appreciation for how your team can start building out infrastructure using real code.
JavaScript now has a yearly release cycle. Every release introduces new features that aim to make us (JavaScript developers) more productive, while making our code succinct, expressive, and (potentially) bug-free.
By the end of this workshop you will be intimately familiar with the newer set of JavaScript features, and ready to dive into your next JavaScript project with confidence.
This workshop allows you to explore many of the new features that we use on a daily basis, including let and const, Map/Set data structures, and class syntax as well as static/private members and methods.
This workshop uses tests to verify and validate all of your changes, so you can be confident in your changes as you go about refactoring your code to use modern JavaScript language features.
Each section of the workshop builds on the previous one, allowing you to continuously refactor code and get tons of practice.
If you want to be fully caught up with all of the changes that have landed in JavaScript between 2015 and 2021, this is the workshop you want to attend!
Please follow the instructions in the README.md file of https://github.com/looselytyped/refactoring-to-modern-javascript
NOTE — If you get an error, be sure that your Node version is up to date. Older versions of Node did not support ES modules natively, and you might run into issues.
If you cannot upgrade Node, consider using a tool like https://github.com/nvm-sh/nvm/blob/master/README.md (Mac/Linux) or https://github.com/coreybutler/nvm-windows (Windows) so you can run multiple versions of Node simultaneously.
As developers we not only operate in different contexts, but also often have these different contexts interplay as part of our work.
Each of the tools that we use — version control systems like Git (along with collaborative tools like GitHub/GitLab), IDEs like Eclipse/IntelliJ, build systems like Gradle, CI/CD tooling like Jenkins, IaC tools like Ansible, the command line — all introduce context.
To be effective developers we need to know when to operate in a certain context, and when to combine or tease apart how these contexts interplay.
Can you improve your release announcements if you format your commit messages consistently? You bet!
How should your build tool interact with your version control system?
What does naming your files have to do with how you use your IDE?
This session will take a look at several of these contexts — it will attempt to discern between them, explore when you should separate them and when you attempt to bring them together.
With lots of examples and lots of quizzes, this session will definitely leave you thinking about a few things.
There's nothing new or exciting about relational databases. We abstract them away with ORMs, grudgingly write a query here or there, but generally try to forget about them entirely. Then the performance and scalability problems begin. “Sharding, the secret ingredient to the web-scale sauce” often won't help us.
The database is at the heart of nearly every system we build. Reading data and writing data account for the majority of performance bottlenecks. When it comes to SQL and relational databases, the syntax is easy, but the concepts often aren't. The most important knowledge is not obvious but it is necessary to make the right design, query, and optimization decisions.
Indexing, a glimpse under the hood of the storage engine and the query optimizer, and some best practices are all you need to bring your DB skills head and shoulders above your peers' and be ready to build bigger, better, faster apps.
This talk is designed to catapult your productivity, enhance your emotional intelligence, and refine your problem-solving skills. This talk is not just a series of presentations; it's a transformative experience tailored for the ambitious software developer and architect seeking to leave a mark in the fast-paced world of technology.
Dive into the essence of developer and architect productivity, where we unravel the secrets to optimizing your workflow and leveraging your skills for maximum impact. Discover the “24 Hours Instant Happiness” principle, a proven strategy to inject a dose of joy into your daily routine, fostering a positive work environment and personal life.
“Maximizing Your Impact” takes you deeper into the realm of influence, equipping you with the tools to excel in your projects and inspire those around you. Through “Effective Communication” and the intriguing “Mirror Technique,” learn how to build rapport, foster collaboration, and lead with empathy, amplifying your charisma in all professional interactions.
As we delve into the core of success, “Emotional Intelligence is 85% of Success” highlights the paramount importance of self-awareness, self-regulation, motivation, empathy, and social skills in achieving your goals. The “6 Phase Meditation Approach” and “Day Launcher” sessions are designed to refine your focus, creativity, and emotional stability, setting a solid foundation for a productive day ahead.
The inclusion of “Empathy Maps” and “IDEO Case Studies” offers a practical lens through which to view user-centric design and innovation. At the same time, the “SCAMPER Technique” provides a creative framework for problem-solving, ensuring you're equipped to tackle challenges with agility and inventiveness.
Elevate your productivity to new heights with “5 Choices for Super Productivity,” a comprehensive guide to prioritizing effectively, embracing extraordinary outcomes, and mastering your technology. Learn the art of “Managing Energy, Not Time,” a paradigm shift that promises to enhance your efficiency and job satisfaction.
As the talk culminates, “The Paradox of Choice” and the latest “Technology Trends to Focus On” prepare you to navigate the complexities of the modern tech landscape with confidence and curiosity.
This masterclass is more than just a talk; it's an invitation to transform how you work, lead, and innovate. Join us to unlock your full potential and reshape your future in software development and architecture. Whether you're looking to boost your productivity, enhance your emotional intelligence, or simply find more joy in your work, this talk is your gateway to a more fulfilling career and life.
Developers and Architects are designers, problem solvers, and innovative, creative artists. Software design is an art that requires both left and right brains to be active so you can understand what customers need. Next, we will explore habits and tools to plan, learn, research, organize, teach, develop, mentor, and architect.
Agenda
Enhancing Productivity and Personal Growth
– Developer and Architect Productivity
Strategies for improving daily workflow and efficiency in software development and architecture.
– 24 Hours Instant Happiness
Quick wins for boosting morale and happiness within the team and personal life.
– Maximizing Your Impact
Techniques to increase your influence and contributions in projects and teams.
– Effective Communication
Importance of clear communication and the Mirror Technique to improve understanding and rapport.
– Increasing Charisma
Tips for becoming more charismatic and influential in professional settings.
Building Emotional Intelligence and Mindfulness
– Emotional Intelligence is 85% of Success
Discussing the critical role of emotional intelligence in achieving professional success.
– 6-Phase Meditation Approach
Introducing a meditation technique to enhance focus, creativity, and emotional stability.
– Day Launcher
A strategy to start your day with intention and focus, setting the tone for productivity and success.
– Empathy Map
Utilizing empathy maps to better understand user needs and enhance team collaboration.
– IDEO Case Studies
Examining case studies from IDEO to illustrate successful applications of empathy in design.
– Understanding a Problem with SCAMPER Technique
Exploring the SCAMPER technique to creatively solve problems and innovate solutions.
Strategies for Super Productivity
– 5 Choices for Super Productivity
Detailed strategies for enhancing productivity by prioritizing important tasks, aiming for extraordinary outcomes, scheduling priorities (“big rocks”), mastering technology use, and maintaining energy levels.
– Managing Energy, Not Time
Shifting focus from time management to energy management to maximize productivity and well-being.
– Increasing Frequency to Do What You Want
Techniques to align daily actions with personal and professional goals more effectively.
– The Paradox of Choice
Understanding how reducing options can lead to increased satisfaction and productivity.
– Technology Trends to Focus On
Highlighting current technology trends that developers and architects should be aware of to stay ahead in their field.
Seven-step methodology for designing superior AI-enabled microservices
In most monolithic applications, business objects and data model designs are already present. As part of moving to the cloud, companies miss out on the use cases the REST APIs need to support. Implementing cloud solutions on top of existing models may lead to performance issues with APIs and cause scalability issues, and clients then need to rewrite code due to a new version of the APIs.
In this talk, we will explore a ten-step methodology for designing superior cloud-native RESTful microservices APIs. First, define the business domain objects and how they relate to use cases. If the use case is to support <500 ms response time and availability of 99.99%, design the application for consistency, availability, and partition tolerance. Next, create an ideal design which solves the use cases, referring to the industry-standard JSONs and designs from schema.org, iana.org, and microformats.org. Later, find the fail points in the process and go back to the first step to resolve the pain points: go back to defining the problem. The questions to ask are: what can go wrong, and when can it go wrong? Next, create a Facade pattern to connect to either the existing monolithic app or a new app that supports the new cloud use cases. Create an API gateway so other companies can build software and create more offerings. Next, design common layers for error handling, logs, and security. For API security, perform threat modeling to find security vulnerabilities and plan for mitigation of risks. Use generic authentication using SAML, OAuth, and JWT to support authentication and authorization. Next, create an API versioning strategy so that the REST API can evolve with minimal client changes. Apply cloud-native design patterns for resiliency. In the end, test APIs using contract-driven testing and Pact files.
This talk is ideal for the following roles:
Architects
Technical Leads
Programmers
Integration Architects
Solution Architects
We are all proud of being the best coders, but have we analyzed our code for optimization? What tools can guide us toward creating algorithms with better time and space complexity?
How you approach a problem is critical. How do you solve the problem? How do you optimize the problem?
This talk will not only prepare you for interviews but will also give you confidence in how to crack coding challenges.
In this talk, we will be exploring the analytical skills and coding skills behind the top algorithms and common problems from the real world, including:
Single Source Shortest Paths
Traveling Salesman problems
Pattern Search Algorithms
Greedy Algorithms
Knapsack problem
Priority queue
Problem-solving skills are valuable and help develop optimal algorithms. We will look at problem-solving flow charts to make the right decisions for a given problem.
Topics we will cover are:
Arrays and Strings - Palindrome permutations
Linked Lists - Runner technique, Recursion
Stacks and queues
Trees and Graphs - Binary Tree, Binary Search Tree, Tries
Sorting and Searching
Single-Source Shortest Paths: Bellman-Ford, BFS, DFS, Dijkstra, Dynamic Programming
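To ground one of the topics above, here is a compact sketch of Dijkstra's single-source shortest paths using a priority queue; the tiny three-node graph is just example data, not a problem from the talk.

    import java.util.Comparator;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.PriorityQueue;

    class Dijkstra {
        // Shortest distances from `source` over a graph with non-negative edge weights,
        // given as an adjacency list: node -> list of {neighbor, weight} pairs.
        static Map<Integer, Integer> shortestDistances(Map<Integer, List<int[]>> graph, int source) {
            Map<Integer, Integer> dist = new HashMap<>();
            PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt((int[] a) -> a[1]));
            pq.add(new int[]{source, 0});
            while (!pq.isEmpty()) {
                int[] head = pq.poll();
                int node = head[0], d = head[1];
                if (dist.containsKey(node)) continue;              // already settled via a shorter path
                dist.put(node, d);
                for (int[] edge : graph.getOrDefault(node, List.of())) {
                    if (!dist.containsKey(edge[0])) pq.add(new int[]{edge[0], d + edge[1]});
                }
            }
            return dist;
        }

        public static void main(String[] args) {
            Map<Integer, List<int[]>> graph = Map.of(
                0, List.of(new int[]{1, 4}, new int[]{2, 1}),
                2, List.of(new int[]{1, 2}));
            System.out.println(shortestDistances(graph, 0));       // 0 -> 0, 2 -> 1, 1 -> 3
        }
    }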
This talk is ideal for the following roles:
Architects
Technical Leads
Programmers
Integration Architects
Solution Architects
As a software architect, you're at the forefront of building scalable, secure, and resilient systems that drive innovation while safeguarding critical digital assets. This workshop is designed to equip you with actionable strategies, cutting-edge tools, and deep technical insights into embedding security into every phase of the software development lifecycle.
In this immersive, hands-on session, we will explore how to elevate your DevSecOps practices to meet the challenges of today’s evolving threat landscape while ensuring productivity and operational excellence.
What You'll Learn:
Why You Should Attend:
Who Should Attend:
This workshop is ideal for:
Join us for this transformative session to gain the skills and knowledge necessary to design secure, scalable, and resilient systems that protect your organization and enable innovation.
This session is a must-attend for architects aiming to design secure, scalable systems while staying ahead in the rapidly evolving security landscape.
A Docker image is the artifact of the container world. Leaner images allow for quicker build times, lower resource usage (disk pressure and network usage), fewer attack vectors, and better performance when pulling or pushing images for storage or upon deployment. Lean images also produce smaller containers, which in turn require fewer resources at runtime, allowing for higher server density. Multi-stage Dockerfiles can help reduce the complexity of CI/CD pipelines by reducing the number of moving parts in building, testing, and producing a production-grade image. The key to building leaner (smaller) images, with little build-time overhead, is to understand how Docker uses the Union File System (UFS), how Docker builds (and when it busts) the cache, and how to use the Dockerfile specification to its fullest potential.
In this exercise-driven, hands-on workshop, we will dive deep, peeking under the hood to get a glimpse of the Union File System, and then proceed to look at the effects of many of the important Dockerfile instructions. We will see how best to use them and highlight any caveats that we should be aware of.
By the end of this class you will have gained a keen understanding of how best to write your Dockerfiles, and how to effectively build and design lean images and containers.
Jamie Zawinski once said, “Some people, when confronted with a problem, think ‘I know, I'll use regular expressions.' Now they have two problems.” Many consider regular expressions to be indecipherable, but the truth is that every programmer should consider regular expressions an integral part of their toolkit. From the command line to your favorite text editor, from parsing user input to scraping HTML pages - once you know regular expressions, you will find a use for them in almost every programming context.
In this session we will attempt to unriddle the mystery that regular expressions pose. We will start at the basics and work our way towards more complex expressions.
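As a small taste of where we will end up, here is a Java example; the log-line format is an invented one, chosen only to show capture groups and alternation.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    class RegexDemo {
        public static void main(String[] args) {
            // Year, month, day, severity, and message captured from a made-up log line format.
            Pattern logLine = Pattern.compile("^(\\d{4})-(\\d{2})-(\\d{2})\\s+(ERROR|WARN)\\s+(.*)$");
            Matcher m = logLine.matcher("2024-05-01 ERROR Disk almost full");
            if (m.matches()) {
                System.out.println(m.group(1));   // 2024
                System.out.println(m.group(4));   // ERROR
                System.out.println(m.group(5));   // Disk almost full
            }
        }
    }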