How was 2018 for the SaaS market?

You might not know it, but almost every business relies on SaaS (Software as a Service) to operate. According to Business Wire, the SaaS market is expected to grow at an annual rate of 21.2%. 2018 was a milestone year for SaaS, as it saw a couple of SaaS companies go public. The cloud market drives the overall SaaS market, because cloud infrastructure is what makes SaaS possible. That is why this article includes cloud market statistics as well as SaaS statistics.
Cloud Driving the Bottom Line
Recently, Gartner forecast 17.3% growth in the cloud market. Cloud system infrastructure services (the fastest-growing segment of Infrastructure as a Service) are expected to grow by 27.6% in 2019. With this in mind, it is perhaps surprising to find that Amazon is no longer primarily an e-commerce platform. Amazon has shifted its focus in recent years to its B2B service, Amazon Web Services (AWS), which contributed $6.1 billion to the company's revenue in 2018.
While AWS has a 41.5% share of the public cloud market, its competitor Microsoft Azure is catching up. In its latest earnings report, Microsoft reported that Azure grew 89% over 2018, a growth rate almost double that of AWS.

The shift from On-Prem
Microsoft Azure currently holds 29.4% of the market, whilst Google has a minor 3%. Other players, such as IBM and Rackspace, make up the remaining 25% of the market.
Despite Azure’s growth, Amazon is still the preferred cloud platform, with 80% of enterprises running or experimenting with AWS. Both Microsoft and Amazon saw increased adoption rates, showing that enterprises are gradually shifting their data to the cloud, possibly because CIOs and business leaders now understand the transformational effect cloud computing has on their businesses.
Some examples of enterprises moving to the cloud include:
– Capital One (an American bank) hosts its mobile app on AWS
– GE Oil & Gas is migrating most of its computing and storage capacity to the public cloud to reduce risk and optimize cost
– Maersk is migrating legacy systems to the cloud to optimize processes whilst enabling business intelligence and AI to streamline its operations.
The biggest growth in SaaS yet
Microsoft leads the SaaS market with a 17% market share and an annual growth of 45%. The total enterprise SaaS market is presently generating $20 billion in quarterly revenue.
According to Synergy Research Group, Salesforce has the majority market share when it comes to CRM. However, the CRM segment is growing relatively slowly in comparison to other segments.

If we look at SaaS by industry vertical, the majority of cloud adoption sits in financial services, with an adoption rate of 19%; this is commendable when compared to other verticals such as insurance and healthcare. Overall enterprise cloud adoption, however, remains low at 20%.

Using Data Virtualisation to Simplify Machine Learning

Having all your data in one place doesn’t necessarily make finding things easy; in fact, most of the time it’s like finding a needle in a haystack.
People often call data the oil of the technology age: a very valuable commodity that drives organisations everywhere. The volume and variety of data flowing through organisations today is so vast that data lakes are now one of the principal data management architectures. According to Forbes, “A data lake holds data in an unstructured way and there is no hierarchy or organization among the individual pieces of data. It holds data in its rawest form—it’s not processed or analyzed.” This, interestingly, is supposed to make data easier to find and to reduce the time data scientists spend on selection and integration. An added benefit is that data lakes provide massive computing power, allowing data to be transformed to meet the needs of the processes that require it.
A recent study found that organisations that deployed data lakes outperformed their peers by up to 8%. However, most businesses struggle when it comes to applying machine learning to these data lakes to gain insight from the data. In fact, the majority of data scientists spend 80% of their time on this task. It’s time for a change.
Despite what one might think, having all your data in one physical place does not make finding it easier. Storing data in its raw form still requires it to be adapted for machine learning, and that burden falls on data scientists. The past few years have brought tools that help with integration, but there still remain tasks that require a more advanced skill set.
To address these issues, data virtualization is needed.
Primarily, data virtualization allows data scientists to access more data in the format they prefer. It provides a single access point to any data, regardless of its location or format, and applies different logical views to the same physical data without the need for replication. In doing so, data virtualization offers fast and inexpensive ways of shaping data to meet the needs of different users across an organisation.
Because data virtualization doesn’t require data to be replicated (unlike an architecture built on data lakes alone, which does), new data can be added more quickly. The best data virtualization tools also provide a searchable catalog of all available data sets, including extensive metadata.
By employing data virtualization, IT data architects can create reusable logical data sets that expose information in ways suited to different specific purposes. Data scientists can then adapt these reusable data sets to the individual needs of different machine learning processes. With the virtualization layer taking care of complex issues such as transformation and performance optimisation, data scientists need only perform any final, more straightforward customisations that might be required.
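To make the idea concrete, here is a minimal sketch in plain Python of a "virtual layer" that exposes reusable logical views over physical sources without copying them. The class, source names and view names are invented for illustration and do not correspond to any real data virtualization product.

```python
# Hypothetical sketch (not a specific product's API): a minimal virtual layer
# exposing reusable logical views over physical sources without replicating them.
class VirtualLayer:
    def __init__(self):
        self._sources = {}  # source name -> loader callable
        self._views = {}    # view name -> builder callable

    def register_source(self, name, loader):
        # A loader stands in for a database query, REST call or file read.
        self._sources[name] = loader

    def define_view(self, name, builder):
        # A reusable logical data set, defined once by IT data architects.
        self._views[name] = builder

    def query(self, view_name):
        # Data is fetched and combined only at query time; no copies are kept.
        live = {name: load() for name, load in self._sources.items()}
        return self._views[view_name](live)


# Two toy "physical" sources standing in for a CRM database and a data lake.
vl = VirtualLayer()
vl.register_source("crm", lambda: {1: "Acme", 2: "Beta"})
vl.register_source("lake", lambda: {1: 120.0, 2: 80.0})

# Logical view: customer name joined with spend, keyed by customer id.
vl.define_view("customer_spend", lambda s: [
    {"id": cid, "name": name, "spend": s["lake"].get(cid, 0.0)}
    for cid, name in s["crm"].items()
])

rows = vl.query("customer_spend")
print(rows)
```

A data scientist could then define further views on top of `customer_spend` for a specific machine learning process, while the architect-owned view stays unchanged.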

Is Data Virtualisation the key to Machine Learning?

It used to be said that technology was the lifeblood of the organisation; then it was connectivity, and now it is data, regardless of industry or size. Thanks to the evolution of technology, everything is captured, and we now have a multitude of data. Used correctly, this data can greatly improve any business.

This all makes data increasingly valuable, and as a result, data lakes (repositories that allow organisations to store all their structured and unstructured data) have become a popular part of a business’s data management architecture. By storing all data in data lakes, organisations can access their data easily, saving time and money. These lakes also give businesses access to a range of insights that enable well-informed decisions, and applying machine learning to this data allows businesses to forecast outcomes and achieve the best results.

Despite all these benefits, businesses still struggle with integration and data discovery. Storing data in its original format does not remove the need to adapt it for machine learning, and having all your data in one physical place is like trying to find a needle in a haystack. On top of this, many organisations use various data storage solutions, such as on-premise servers, the cloud and data centres, so it is more like trying to find a needle in several haystacks.

Fortunately, some tools have come onto the market to assist in integrating all this data; however, more complex tasks need a more advanced skill set, and that is where data virtualization comes into play. Data virtualization provides a single point of access to any data, regardless of format. Implemented correctly, it can stitch together various bits of data from multiple sources in real time, removing the need for data to be replicated into one location before a business can read it and gather insight.
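As a rough analogy for this real-time stitching, the sketch below uses Python’s built-in sqlite3 module, with ATTACH standing in for a federated query engine that reaches into separate stores. The database and table names are purely illustrative and not taken from any real system.

```python
import sqlite3

# One connection acts as the single access point over two separate stores.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS warehouse")  # e.g. an on-prem database
conn.execute("ATTACH DATABASE ':memory:' AS lake")       # e.g. a cloud data lake

conn.execute("CREATE TABLE warehouse.customers (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE lake.events (customer_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO warehouse.customers VALUES (?, ?)",
                 [(1, "Acme"), (2, "Beta")])
conn.executemany("INSERT INTO lake.events VALUES (?, ?)",
                 [(1, 120.0), (1, 30.0), (2, 80.0)])

# One query stitches both sources together at read time; nothing is copied
# into a single physical store beforehand.
rows = conn.execute("""
    SELECT c.name, SUM(e.amount) AS total
    FROM warehouse.customers AS c
    JOIN lake.events AS e ON e.customer_id = c.id
    GROUP BY c.name
    ORDER BY c.name
""").fetchall()
print(rows)  # [('Acme', 150.0), ('Beta', 80.0)]
```

A real data virtualization platform does the equivalent across databases, APIs and file stores that live in genuinely different places, but the principle is the same: the join happens at query time, not at copy time.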

As machine learning and big data continue to grow and support modern business decisions, data virtualization is enabling businesses to seamlessly represent their data.