#NeotysPAC – 3 Top Challenges Facing Every Organization Today, by Todd DeCapua

[By Todd DeCapua]

In my session at the Performance Advisory Council, I led a deep dive into the 3 top challenges facing organizations today, reviewing solution options and practical approaches that work, so we can apply them now and into the future.

My position: “I am sick of the hype…let’s get practical and get sh*t done!”

For each of the 3 top challenges, I led the discussion using slides and conversation organized into three parts: Introduction, Deep Dive, and Future and How-Tos. This turned out to be a great format, one that encouraged active discussion across the council and surfaced perspectives we kept returning to as our time together continued.

Challenge #1: Big Data/Machine Learning/Artificial Intelligence

For this blog topic, I thought it would be interesting to share where some of the most active dialog happened: the “Deep Dive,” where we worked through many of the key learnings we have faced.

  • Do we know all the data points?

When starting, and even when delivering, are you sure you know all the elements of data you are going to need, and will new or changing inputs change the outcomes? This has become a very interesting challenge, especially with very large, unstructured streaming data sets. It is not to be taken lightly; many of us on the council have faced it, while others recognized it is something they will be challenged with soon.

  • Have all the algorithms been defined?

Do you ever know if you have defined or ‘tuned’ all the algorithms? This is an important question, and I would suggest a fact you need to come to grips with on your journey. Of course, as you get deep into the world of algorithms and what they and their results represent, some of you will also be challenged to think introspectively: “Is there a way you could automate algorithms to improve the algorithms?” Continue to challenge the hypothesis, and enable the machines to continuously learn and apply, so as to continuously prove and optimize your findings.
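To make that idea of “algorithms improving algorithms” concrete, here is a minimal sketch of one common pattern: an automated search that repeatedly re-fits a model and keeps the best-performing configuration. The dataset and parameter ranges are illustrative placeholders, not anything from the council discussion.

```python
# A minimal sketch of automated algorithm tuning: a randomized search
# re-fits the model under different settings and keeps the best ones.
# Dataset and parameter ranges below are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions={
        "n_estimators": [50, 100, 200],
        "max_depth": [None, 5, 10, 20],
    },
    n_iter=8,       # budget: how many candidate configurations to try
    cv=3,           # cross-validation keeps the search honest
    random_state=42,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

The same loop can be re-run on a schedule so the machine keeps challenging the current hypothesis as new data arrives.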

  • Where are all the ‘processing engines’?

As you think through all of this compute, or the multiple compute areas within your stack, where is it actually happening? For example, do you plan to have much of the compute done at the edge, or do you keep it closer to the core, and why? As you work through this question, think of resiliency: if the processing fails, or the network between edge and core becomes poor or disconnected, then what happens, and does it matter? Are there power, heat, or other outside environmental forces that also need to be taken into consideration when making this seemingly simple decision?
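As one hedge against that failure mode, here is a minimal sketch of an edge-side store-and-forward pattern, assuming a hypothetical send_to_core() uplink: readings are buffered locally while the core is unreachable and drained in order once connectivity returns.

```python
# A minimal store-and-forward sketch for edge resiliency.
# send_to_core() is a hypothetical uplink; here it always fails,
# to exercise the buffering path.
import collections

buffer = collections.deque(maxlen=10_000)  # bounded: oldest readings drop first

def send_to_core(reading):
    # Hypothetical uplink call; in this sketch it always raises,
    # simulating a disconnected edge-to-core link.
    raise ConnectionError("core unreachable")

def process_at_edge(reading):
    try:
        while buffer:              # drain any backlog first, oldest first
            send_to_core(buffer[0])
            buffer.popleft()
        send_to_core(reading)      # then send the fresh reading
    except ConnectionError:
        buffer.append(reading)     # degrade gracefully: hold locally, retry later

for sample in range(3):
    process_at_edge({"sensor": "temp", "value": 20.0 + sample})
print(f"buffered while offline: {len(buffer)} readings")
```

Note the bounded buffer: part of the design decision is deciding which stale readings still matter when the link comes back.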

  • Are there unique protocols?

Looking across much of today’s data, sensors often provide a lot of information, and their communication and transport protocols continue to evolve at a very rapid pace. How your framework will manage and grow with this changing core element is a consideration not only for ingestion but also for processing and results, whatever form those may take. Are there known protocols today that will limit processing, given the scale and complexity within your environment or among other consumers of your results?
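For illustration, here is a minimal ingestion sketch assuming the paho-mqtt 1.x client library and a placeholder broker address; it normalizes each sensor payload into a common envelope so downstream processing is insulated from protocol churn.

```python
# A minimal sketch of protocol-tolerant ingestion, assuming an MQTT broker
# at broker.example.com (placeholder) and the paho-mqtt 1.x library.
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Normalize each sensor payload into a common envelope so that
    # downstream processing does not depend on the transport protocol.
    envelope = {"topic": msg.topic, "payload": json.loads(msg.payload)}
    print(envelope)

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.subscribe("sensors/#")
client.loop_forever()
```

Swapping the transport (MQTT, AMQP, CoAP, or whatever comes next) should only touch the edges of a framework like this, not the processing behind it.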

  • What are the volumes/size of data?

Knowing we are speaking of “Big Data” and the accompanying Machine Learning and Artificial Intelligence, we need to build forecast models for many factors, including data growth, storage, processing power, network traffic and conditions, and much more. Often these considerations are an afterthought, and the massive, accelerating growth of data and storage quickly becomes crippling, as the appetite for more grows exponentially, often driven not by human intervention but by the machine.
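A simple compound-growth model makes the point; the starting size and growth rate below are illustrative assumptions, not measured figures.

```python
# A minimal compound-growth forecast for data volume.
# Starting size and growth rate are illustrative assumptions.
size_tb = 50.0         # current data set size in terabytes
monthly_growth = 0.15  # 15% month-over-month growth, machine-driven

for month in range(1, 13):
    size_tb *= 1 + monthly_growth
    print(f"month {month:2d}: {size_tb:8.1f} TB")

# At 15%/month the estate is roughly 5.35x its starting size after a year;
# storage, network, and processing forecasts all need to track that curve.
```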

  • How do we create, reset, and refresh?

Well, this is one born of a lot of experience and trial & error. When thinking of Big Data, ML, and AI, how will you be able to create these data sets and environments, and do you have reset and refresh strategies in place so as to maintain the development and deploy cycles required? Many times the rule of seven comes into play: if you have one copy, you will end up with seven as you perpetuate and accelerate through the development cycle. So how much disk will you require, and how can you quickly (in minutes) reset and refresh your environment(s)? A rough sizing sketch follows.
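Here is a minimal back-of-the-envelope sketch of the rule of seven applied to disk sizing; the figures are illustrative assumptions.

```python
# A minimal sizing sketch for the "rule of seven": one master data set
# tends to spawn ~7 working copies across the development and deploy
# cycle. All figures below are illustrative assumptions.
master_copy_tb = 20.0
copies = 7        # dev, test, staging, refresh snapshots, and so on
headroom = 1.25   # 25% buffer for growth between refreshes

required_tb = master_copy_tb * copies * headroom
print(f"Provision ~{required_tb:.0f} TB to support fast reset/refresh")
```

Snapshot-capable storage is what usually turns the “reset in minutes” requirement from aspiration into practice.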

Challenge #2: Blockchain

For this challenge, I wanted to share some of the “Future and How-Tos” insights we discussed. I used content from both Gartner and Mattias Scherer to guide the discussion. Gartner, in its published Top 10 Strategic Technology Trends for 2018, states: “a Blockchain is a powerful tool for digital business because of its ability to: Remove business and technology friction; Enable native asset creation and distribution; Provide a managed trust model.” The illustration below also provides a use case, “The Blockchain Ledger,” on the left, with some “Extended Characteristics” to the right and some common platforms along the bottom.

The illustration below, “Figure 2.4.1,” represents an example transaction flow involving 4 actors: Client, Endorser A, Endorser B, and Committer C, flowing through the 6 steps shown.

This conceptually frames three elements I highlighted during our discussion and have learned are important in shaping this dialog.

1. Define the Business Case

Before making a significant investment of time, resources, and money, it is important to define your business case. This may seem like a simple point, but it is interesting to observe how many individuals and organizations have jumped in and just ‘got started’ without a vision for direction, a potential end state, or a why. How will this capability enable you to differentiate and accelerate your business for your customers? Find a quick and easy opportunity to pilot, and recognize the value, or the lack of it, quickly.

2. Outline Workflows

The next step is to start understanding the relationships, mapping the key handoffs where blockchain can nearly eliminate delays and errors, accelerating and growing the value to your end users, all while doing it faster and cheaper. What you discover could be risks as well as opportunities; remember to keep your focus on your customers and your supply chain.

3. Public vs. Permissioned Blockchain

Another big topic of discussion was whether organizations should be using public or permissioned blockchain environments. Does the choice impact the performance of the blockchain? How are you testing performance across your blockchain today? If the blockchain is public, with line of sight across all of your supply chain, are there conditions that apply which would be different within a permissioned blockchain? Perhaps there is a competitive advantage in setting up a permissioned blockchain where you own the intellectual property and domain expertise, so as to be first to market and deliver a complete automated capability through the supply chain while servicing the customer. What would that mean for you and your organization?
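On the performance-testing question, here is a minimal sketch of a throughput (TPS) harness; submit_transaction() is a hypothetical placeholder you would swap for your network’s actual SDK call (for example, a Hyperledger Fabric client), so this illustrates only the measurement pattern, not any particular platform’s API.

```python
# A minimal sketch of a blockchain throughput (TPS) measurement.
# submit_transaction() is a hypothetical placeholder for a real SDK call.
import time

def submit_transaction(payload):
    # Placeholder: simulate ~10 ms commit latency per transaction.
    time.sleep(0.01)

def measure_tps(n_transactions=500):
    start = time.perf_counter()
    for i in range(n_transactions):
        submit_transaction({"tx": i})
    elapsed = time.perf_counter() - start
    return n_transactions / elapsed

print(f"throughput: {measure_tps():.0f} tx/s")
```

Running the same harness against a public and a permissioned network makes the trade-off the council discussed directly measurable.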

The last topic I covered within this challenge is research by Mattias Scherer titled “Performance and Scalability of Blockchain Networks and Smart Contracts.” This 46-page thesis, published in 2017, covers performance and security aspects of blockchain in great detail.

Cited: Scherer, M. (2017). Performance and Scalability of Blockchain Networks and Smart Contracts (Dissertation). Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-136470

Below is the Abstract published by Mattias for his Independent Thesis.
The blockchain technology started as the innovation that powered the cryptocurrency Bitcoin. But in recent years, leaders in finance, banking, and many more companies have given this innovation more attention than ever before. They seek a new technology to replace their systems, which are often inefficient and costly to operate. However, one of the reasons why it is not possible to use a blockchain right away is its poor performance. Public blockchains, where anyone can participate, can only process a couple of transactions per second and are therefore far from usable in the world of finance. Permissioned blockchains are another type of blockchain, where only a restricted set of users have the rights to decide what will be recorded in the blockchain. This allows permissioned blockchains to have some advantages over public blockchains. Most notable is the ability to split the network into segments where only a subset of nodes needs to validate transactions for a particular application, allowing the use of parallel computing and better scaling. Moreover, the validating nodes can be trusted, allowing the use of consensus algorithms which offer much more throughput.

In this paper, we compare public blockchains with permissioned blockchains and address the notable trade-offs: decentralization, scalability, and security, in the different blockchain networks. Furthermore, we examine the potential of using a permissioned blockchain to replace the old systems used in financial institutes and banks by launching a Hyperledger Fabric network and running stress tests.

It is apparent that with less decentralization, the performance and scalability of a Hyperledger Fabric network are improved, and it is feasible that permissioned blockchains can be used in finance.

Challenge #3: Internet of Things [IoT]

A lot of great discussions started with my IoT dialogue. For this blog, I felt it would be good to start at the beginning, with the “Introduction.” With this and many other topics, I always find it interesting how many people skip over the definition and immediately provide their point of view, only to discover the slight variances in perspective later, when the results would have benefitted greatly from simply defining the topic and perspective first.

So, the definition I shared was, “IoT refers to the connection of devices (other than typical fares such as computers and smartphones) to the Internet. Cars, kitchen appliances, and even heart monitors can all be connected through the IoT. And as the Internet of Things grows in the next few years, more devices will join that list.”

The illustration below provided great discussion points as we went around the room: many of us have worked on several of these scenarios, and we shared some of the challenges and learnings we took away. One realization we all shared was how ‘futuristic’ this all seemed only a few years ago, while now, looking around, much of it is commonly implemented and instrumented in our everyday world. What many agreed has not evolved as quickly are the practices used to ensure everything works as designed, not only under common scenarios but also under extreme conditions or exceptions, use cases that should have been recognized up front, with products designed to perform well under those conditions.
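As one example of the kind of practice that has lagged, here is a minimal sketch of exercising an IoT code path under degraded conditions; read_sensor() and the fallback behavior are hypothetical, and the point is simply that the exception path gets tested deliberately rather than discovered in the field.

```python
# A minimal sketch of testing an IoT code path under degraded conditions.
# read_sensor() is a hypothetical device call that randomly times out,
# mimicking a flaky link; the behavior under test is the fallback path.
import random

def read_sensor(drop_rate=0.3):
    # Hypothetical sensor read; randomly fails to simulate a degraded link.
    if random.random() < drop_rate:
        raise TimeoutError("sensor link degraded")
    return 72.0  # nominal reading

def sample(last_good=None):
    # Under test: fall back to the last known-good value rather than
    # crashing or reporting nothing when the link degrades.
    try:
        return read_sensor()
    except TimeoutError:
        return last_good

last = None
for _ in range(10):
    last = sample(last_good=last)
    print(last)
```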

To learn more about these topics and the discussions we had around each, and to pick up a few items you can apply now to the 3 top challenges you are facing, I wanted to provide the full recording to you. Please let me know what you learn, and share it with others. I look forward to the opportunity of connecting with you soon.

Todd DeCapua

Todd DeCapua is a technology evangelist, passionate software executive, and business leader. Some of his roles/titles include Senior Director of Technology and Product Innovation at CSC, Chief Technology Evangelist at Hewlett-Packard Enterprise, Co-Founder of TechBeacon.com, VP of Innovation and Strategy on the Board of Directors at Vivit Worldwide, Independent Board Director at iHireTech, and Independent Board Director at Apposite Technologies. He is also a very active online author and contributor, and co-author of the O’Reilly book “Effective Performance Engineering.”

Learn More about Performance

Do you want to know more about this event? See Todd DeCapua’s presentation here.
