
Exciting Times


 

We are living in an interesting time, where we are seeing technology innovation happen at a much faster pace than ever. This is driven by the ever-growing requirement to do things faster, with extreme data volumes, with simplicity, and at lower cost!

Many software-driven solutions have sprung up in recent years that leverage commodity hardware to provide very cost-effective and easy-to-use infrastructure for running various workloads. While classical workloads (based on databases – OLTP, OLAP, Exchange, etc.) still drive the enterprise data center, newer workloads (based on object stores, NoSQL, etc.) have seen rapid adoption. Emerging scalable analytics solutions are providing deeper insights, enabling better decisions from rapidly and massively growing data (Big Data). Hypervisors from various vendors have dramatically simplified the management of a variety of workloads and have maximized the utilization of hardware.

Public cloud vendors (Amazon, Azure, etc.) and private/converged cloud vendors (VCE, Nutanix, etc.) have rolled out tightly integrated hypervisors and management apps, with scalability software IP on off-the-shelf hardware, to deliver infrastructure where workloads can be deployed and run with a few clicks – greatly simplifying the job of data center admins. The Software Defined Data Center is no longer just a buzzword; it is happening now. Users are shifting from building their own infrastructure – by independently buying servers, switches, storage, and software – to either public or private clouds where resources are already integrated and ready to use!

 

These changes create exciting times for everyone in the data center!

 

But New Disruptive Innovations Are Happening in Hardware…

 

While this first wave of software-led innovation on the commodity hardware of today continues to mature, a fundamental shift has begun in the underlying hardware technologies. These new hardware technologies are quite disruptive. As they transition from mere ideas to real products, another wave of software innovation is inevitable. This new hardware is not only showing early signs of enormous benefits for the applications of today, but is also uncovering new use cases. There is a great deal of excitement around the arrival of persistent memory (3D XPoint, etc.), low-latency interconnect products/solutions (RoCE, etc.), low-overhead container technologies, and the recognition of new roles for FPGAs/GPUs. All of these technologies are moving toward the same goal: accelerating workloads in a cost-effective way.

 

So… What Does This Mean for Software, and What Solution Opportunities Do These Present?

 

As most of these hardware components make their way into the ecosystem, they are also showing the need for the software stack to evolve. The software stack needs to adapt to consume one, or a combination, of these new components in a meaningful way, for dramatic improvement of workloads.

 

Let’s take a look at two of these hardware innovations and the potential gaps in today’s software stack that prevent their full exploitation. The new persistent memory and low-latency network interconnect technologies promise that, in the near future, it will be possible to build a rack with the following ingredients:

 

  • Large persistent memory (for storage) with ~1 µsec latency

  • Network interconnect with ~1 µsec latency

 

That’s an order of magnitude better than the combined latencies (hundreds of µsec) of the equivalent components within a rack today. So imagine the impact when access to persistent data – both within and across compute nodes – can be super-efficient. It is very disruptive! These technologies have the potential to accelerate many of today’s workloads (5X/10X/20X acceleration?), whether they are single-threaded (queue depth of 1) or multi-threaded (with higher queue depths). That means a rack built with these capabilities can run many more workloads (and run them faster) than can be done today in an equivalent footprint. This has significant implications for business agility, power savings, real estate, etc. But that’s not all. The new storage access models (persistent memory and low-latency network interconnect) also promise to dramatically improve and simplify the programming of quite a few applications. These innovations will have a larger impact on workloads than all-flash arrays had when they arrived in the data center!
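The “order of magnitude” claim is simple arithmetic. A minimal sketch, where every latency figure is an illustrative assumption (not a measurement of any specific product):

```python
# Illustrative latency arithmetic, in microseconds. All figures are
# assumptions chosen to match the rough ranges discussed above.

# Today's rack: block-storage access through the software stack,
# plus an in-rack network round trip.
today_storage_us = 100.0
today_network_us = 50.0
today_total_us = today_storage_us + today_network_us   # 150.0

# Projected rack: persistent memory plus low-latency interconnect.
future_storage_us = 1.0
future_network_us = 1.0
future_total_us = future_storage_us + future_network_us  # 2.0

speedup = today_total_us / future_total_us
print(f"combined today: {today_total_us} us, projected: {future_total_us} us")
print(f"roughly {speedup:.0f}x lower access latency")
```

With these assumed numbers the combined access latency drops from 150 µsec to 2 µsec – well over an order of magnitude, which is what makes the software-overhead problem discussed below so visible.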

 

However, the software stack of today is not ready to truly leverage the benefits offered by this upcoming disruptive hardware. The overhead of the current system software stack (in the IO path and data services path) masks the benefits these technologies offer. A research paper from the Georgia Institute of Technology (Systems and Applications for Persistent Memory) notes:

 

“…Research has shown that, as storage becomes faster, software overheads tend to become the most dominant source of wasted effort, therefore necessitating rethinking of the software stacks [105]. As discussed earlier, traditional storage stacks assume that storage is in a different address space, and operate on a block device abstraction. They implement intermediate layers such as page cache to stage the data. When using PM (persistent memory), such a layered design results in unnecessary copies and translations in the software. It is possible to eliminate these overheads by completely avoiding the page cache and the block layer abstraction. Providing low overhead (but managed) access to PM is critical to ensure that applications harness the full potential of PM… ”

 

Given that these hardware components are coming and will become ‘commodity’ at some point, solving today’s software stack problems (especially in the IO path and data services path) is a significant opportunity. Furthermore, because these components and software are not yet available in a usable overall product form, providing these capabilities in an integrated product is a tremendous opportunity. Someone needs to take a step back and build a solution that glues together these discrete but related pieces of innovation into a usable, finished product – essentially a user-consumable end product that integrates these new components with innovative changes in the software stack!

Well… Quite a Few Research Efforts Are Already in the Works…

 

Several open source initiatives are in play, and many companies are collaborating to standardize interfaces and to demonstrate the benefits for various workloads. Many possible solutions and workload transitions are being discussed.

 

Persistent Memory Programming Model

  • http://pmem.io
  • “For many years computer applications organize their data between two tiers: memory and storage. We believe the emerging persistent memory technologies introduce a third tier. Persistent memory (or pmem for short) is accessed like volatile memory, using processor load and store instructions, but it retains its contents across power loss like storage.”
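The load/store access model described above can be sketched without real PM hardware by memory-mapping an ordinary file. This is only a stand-in: on actual persistent memory the file would be DAX-mapped and the persistence point would be CPU cache-line flushes (e.g. via libpmem), not `msync()`. The file name below is arbitrary.

```python
import mmap
import os
import tempfile

# Stand-in for a persistent memory region: an ordinary 4 KiB file
# mapped into the process address space.
path = os.path.join(tempfile.mkdtemp(), "pmem_region")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)          # size the "persistent" region

with open(path, "r+b") as f:
    region = mmap.mmap(f.fileno(), 4096)
    region[0:5] = b"hello"           # a plain store; no read()/write() syscalls,
                                     # no page-cache copy in the access path
    region.flush()                   # persistence point (msync here; on real PM,
                                     # a cache flush + fence instead)
    region.close()

# Simulate "after power loss" by reopening the file: the stored
# bytes are still there, as storage semantics require.
with open(path, "rb") as f:
    survived = f.read(5)
print("data survived:", survived)
```

The point of the model is exactly this combination: data is accessed with ordinary loads and stores like memory, yet survives like storage once flushed.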

 

Georgia Institute of Technology

  • SYSTEMS AND APPLICATIONS FOR PERSISTENT MEMORY
  • “Emerging non-volatile (or persistent) memories bridge the performance and capacity gap between memory and storage, thereby introducing a new tier. To harness the full potential of future hybrid memory systems coupling DRAM with PM, we must build new system software and application mechanisms that enable the optimal use of PM as both fast storage and scalable low cost (but slower) memory“

 

SNIA

  • http://www.snia.org/forums/sssi/nvmp
  • “A new programming model for persistent memory (PM) – NVM hardware designed to be treated by software similarly to system memory”
  • http://www.snia.org/sites/default/files/NVM/2016/presentations/RickCoulson_All_the_Ways_3D_XPoint_Impacts.pdf

 

So… What Products/Solutions and Markets Are We Talking About?

 

Momentum is building and recognition is growing about the existence and potential of these innovations as they make their way into the market, and the expectation is that they will be the ‘commodity’ hardware in…

Creative Management


 

Great leaders have many skills, but among these there is one skill that sets the tone for their success or failure. There is one big thing to remember: not all past ways will work from one company to the next. No matter what, a different company will exhibit technological differences that require much different development, operational, GTM, and sales structures, etc., to meet the needs of the respective company. Time-to-market is of the essence, but if the management team is merely putting what may have worked at their past company(s) into their new company, then this is a disaster waiting to happen – it has never shown success in any case study, and I can give plenty of examples. Great leaders should have many ways to adapt to the evolving changes in today’s high-technology companies. This is creative management vs. ‘hand-me-down’ management. It’s a skill that’s earned with experience and that has proven successful, time and again.

 

The management team should always be on the same page, understanding the essentials for executing not only their own organizational responsibilities, but also for helping others in different areas that may need their support. These areas could include things such as product and roadmap feasibility, market TAMs, the competitive environment, engineering execution (i.e., product release processes), operational models (i.e., manufacturability, inventory management, etc.), GTM overlay into sales models, support and services models, etc. All these areas are the ‘foundation of the company’, and they need to be established properly to execute on all fronts. Think of them as connecting together, like Legos, to build a great output – something that people are proud of. They all have to click together; if not, there are breakdowns, and goals/results will not be met. It’s critical that every management team member is aligned.

 

Building the company’s foundation is the crux of every company’s problem, and it takes the longest to fix. The very first thing that breaks companies is management team chemistry. It’s absolutely critical that the management team is firing on all cylinders together, because not only are there dependencies between each other’s organizations that are key to the company’s success, but it’s also critical that every departmental leader understands the elements of all the other departments as well. If they’re great executive leaders, then they had better know their own respective department and charter inside and out. But if the marketing leader fully understands the charter of the engineering leader, and vice versa, then I guarantee you that the output of both respective leaders will be even higher. This is very easy to see, and the employees feed off of it, positively. It’s the exact opposite of ‘I’ll play my part and you play yours’. Management team members need each other to win, just like in any team sport; if all the players on a team are working together in harmony, then an average team can beat better teams that have ‘me-only’ players.

 

Simply put, this is creative management, and it’s the job of the management team to work together – leveraging old ideas with new ideas, thoughts, and mindshare – to do what will work for this particular company in order to achieve company success. It’s not just ‘one’ individual. It’s absolutely key to make sure these foundational areas are all working together, like Legos linked together, to achieve their goals before any movement in the market should be expected. Without this, management is setting false expectations for employees, board members, shareholders, and other key stakeholders. It’ll turn out to be all talk, with no positive results. Building the foundation is the key to any successful company, and it all really starts with having the management team in unison from the beginning.

Learn more by talking with key executives at Linear Growth Consulting, LLC
