How cloud applications are transforming IT


Since the early 2000s, software has evolved rapidly, and these non-stop changes have greatly upset the balance of power in computing.

For something like a Content Management System (CMS) to be cloud-native, the entire system must exist in the cloud: it is developed, tested, deployed, debugged and updated there. The system is not installed on an on-premises server for permanent residency, nor is it converted to a virtual machine image to make it available across servers. Systems like these are designed for the cloud, which requires fundamental changes to a business’s architecture and to the IT economy that supports it.

A cloud-native application is made for the systems that host it, rather than being converted or staged in a virtual environment that hides the nature of the cloud from it. Since the beginning of computing, software has been designed for the machines destined to run it. Dartmouth’s John Kemeny and Thomas Kurtz helped shape modern computing by devising a language meant to withstand trial-and-error programming: BASIC. The principle behind BASIC is that software makes the best use of the machine it runs on when it is nurtured and developed inside that machine rather than compiled separately. Cloud-native computing applies the same principle, extended to cloud platforms.

With the rise of high-level programming languages, software became less reliant on the hardware it was designed for. Now hardware is designed around software, and we can’t go back.

“The cloud” (which it is far too late to rename) is a machine, albeit one that spans the planet. A cloud may be any combination of resources, located anywhere on Earth, whose network connectivity enables them to function in concert as a single assembly of servers. A business could own its cloud in its entirety, rely on the likes of Microsoft, Amazon and Google for a cloud-native environment, or use both its own cloud and those of cloud suppliers. So when we say an application is “native” to this kind of cloud, we mean not only that it was constructed for deployment there, but that it is portable throughout any part of the space the cloud encompasses.

A cloud-native application is designed for the cloud platform it is intended to run on; its life is in that platform. This changes the computing landscape for two reasons:

  • “Version” means something different than it did 10 years ago – anyone who knows Windows understands this. Windows XP, Vista, 7, 8 and 8.1 each arrived as a separate release, but Windows 10 now simply receives rolling updates rather than being replaced. A true cloud-native application evolves the way a smartphone OS does – you didn’t need to pay to update your Android from Oreo to Pie.
  • There is no clear reason why any application needs to be installed on a PC – except in instances of no connectivity

Soon the very phrase “cloud-native” may fall into disuse, like the tag on the 1990s and early 2000s TV shows that read, “Filmed in high definition!”

How the Cloud Helps Your Business Save Money


Embrace Innovation

Innovating in any kind of business isn’t just about adopting the latest technology; it is about how you use that technology to streamline processes and create cost benefits.

The cloud has introduced convenience, accessibility and easier management to businesses everywhere. For a business with a large on-the-road staff complement, the cloud enables staff to log in and access mission-critical applications from any web-connected device.
Mobility is just one advantage of the cloud; shifting resources to the cloud can also bring significant cost savings.

Increasing productivity means decreasing costs

Modern cloud software is built for businesses of any size – from inventory forecasting to selling, ordering and accounts. Today’s technology can, and probably already does, handle the crucial business functions in your organization, and utilizing cloud-based technologies gives you more powerful tools while bringing cost benefits.
Cloud technologies provide secure, on-demand access to real-time data. Many businesses have found that migrating to cloud technology decreases costs, as the burden of maintaining systems fades away.

Often when businesses move from on-premises servers to the cloud, they reduce not only physical space but also electricity and maintenance fees. Physical servers require hardware, and hardware frequently becomes outdated, which means the additional cost of replacing it. In fact, migrating to the cloud can result in a 16% reduction in operational costs.

CapEx vs OpEx

CapEx refers to capital expenditure: investment in fixed assets such as physical servers and hardware. OpEx refers to operational expenditure: the day-to-day expenses incurred to keep the business running. The trend is for businesses to move to an OpEx model, and it makes business sense. Capital assets such as servers and hardware become outdated and lose value over time, while OpEx is usually tax deductible as it is considered a short-term cost. Cloud applications follow an OpEx model, since they are typically priced on a subscription basis.

What’s more, CapEx spending often requires a major up-front investment. Consider the costs involved in setting up on-premises server farms, software licenses, and buying industrial equipment outright. With cloud software and a simple subscription fee, these costs can be minimized and streamlined—money that can be reinvested back into the business.
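To make the CapEx-vs-OpEx comparison concrete, here is a minimal sketch of the arithmetic. All figures are illustrative assumptions, not real pricing; the point is simply that a one-off purchase has to be spread (depreciated) over its useful life and topped up with maintenance, while a subscription is a flat recurring cost.

```python
# Hypothetical figures: compare an up-front CapEx purchase, depreciated
# over its useful life, with an OpEx subscription. Numbers are illustrative.

def capex_annual_cost(purchase_price, useful_life_years, annual_maintenance):
    """Straight-line depreciation per year, plus ongoing maintenance."""
    return purchase_price / useful_life_years + annual_maintenance

def opex_annual_cost(monthly_subscription):
    """A subscription is simply twelve monthly payments."""
    return monthly_subscription * 12

# Assumed: a $30,000 server kept for 5 years with $4,000/yr maintenance,
# versus a $700/month cloud subscription.
server = capex_annual_cost(purchase_price=30_000, useful_life_years=5,
                           annual_maintenance=4_000)
cloud = opex_annual_cost(monthly_subscription=700)

print(f"On-premises server: ${server:,.0f} per year")   # $10,000 per year
print(f"Cloud subscription: ${cloud:,.0f} per year")    # $8,400 per year
```

With these assumed numbers the subscription comes out cheaper per year, and, just as importantly, it avoids the $30,000 up-front outlay entirely.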

How was 2018 for the SaaS market?


You might not know it, but almost every business relies on SaaS (Software as a Service) to operate. According to Business Wire, the SaaS market is expected to grow at an annual rate of 21.2%. 2018 was a milestone year for SaaS, as it saw a couple of SaaS companies go public. The cloud market drives the overall SaaS market, because the cloud is what makes SaaS a reality. That is why this article includes cloud market statistics as well as SaaS statistics.

Cloud Driving the Bottom Line

Recently, Gartner forecast 17.3% growth in the cloud market. Cloud system infrastructure services (the fastest-growing segment of Infrastructure as a Service) are expected to grow by 27.6% in 2019. With this in mind, it is unsurprising that Amazon is no longer primarily an e-commerce platform. Amazon has shifted its focus in recent years to its B2B service, Amazon Web Services (AWS), whose revenue contribution to the company amounted to $6.1 billion in 2018.

While AWS holds a 41.5% share of the public cloud market, its competitor Microsoft Azure is catching up. In its latest earnings report, Microsoft reported that Azure grew by 89% over 2018 – a growth rate almost double that of AWS.

The shift from On-Prem

Microsoft Azure currently holds 29.4% of the market, whilst Google has a minor 3%. Other players such as IBM and Rackspace make up the remaining 25% or so of the market.

Despite Azure’s growth, Amazon is still the preferred cloud platform: 80% of enterprises running or experimenting in the cloud use AWS. Both Microsoft and Amazon have seen increased adoption rates, proving that enterprises and businesses are gradually shifting their data to the cloud – possibly because CIOs and business leaders understand the transformational effect cloud computing has on their businesses.

Some examples of enterprises moving to the cloud include:

  • Capital One (an American bank) hosts its mobile app on AWS
  • GE Oil & Gas is migrating most of its computing and storage capacity to the public cloud to reduce risk and optimize cost
  • Maersk is migrating legacy systems to the cloud to optimize processes whilst enabling business intelligence and AI to streamline its operations.

The biggest growth in SaaS yet

Microsoft leads the SaaS market with a 17% market share and an annual growth of 45%. The total enterprise SaaS market is presently generating $20 billion in quarterly revenue.

According to Synergy Research Group, Salesforce holds the majority market share when it comes to CRM. However, the CRM segment is growing relatively slowly in comparison to other segments.

If we look at SaaS by industry vertical, the majority of cloud adoption lives in the financial services industry, with an adoption rate of 19% – strong when compared to other verticals such as insurance and healthcare. Overall, however, enterprise cloud adoption remains low at 20%.

Using Data Virtualisation to Simplify Machine Learning


Having all your data in one place doesn’t necessarily make finding things easy; in fact, most of the time it’s like finding a needle in a haystack.

People often call data the oil of the technology age. It’s a very valuable commodity that drives organisations everywhere. The volume and variety of data that flow through organizations today are so vast that data lakes are now one of the principal data management architectures. According to Forbes, “A data lake holds data in an unstructured way and there is no hierarchy or organization among the individual pieces of data. It holds data in its rawest form—it’s not processed or analyzed.” This, interestingly, is supposed to make data easier to find and to reduce the time data scientists spend on selection and integration. An added benefit is that data lakes provide massive computing power, allowing data to be transformed to meet the needs of the processes that require it.
A recent study found that organisations deploying data lakes outperformed their peers by up to 8%. However, most businesses struggle when it comes to applying machine learning to these data lakes to gain insight from the data. In fact, the majority of data scientists spend 80% of their time on this task alone. It’s time for a change.

Despite what one would think, having all your data in one physical place does not make finding it easier. Storing data in its raw form still requires it to be adapted for machine learning, and that burden falls on data scientists. The past few years have brought tools that help with integration, but there remain tasks that require a more advanced skill set.
To address these issues, data virtualization is needed.

Primarily, data virtualization allows data scientists to access more data in the format they prefer. It provides a single access point to any data, regardless of its location or format, and can present different logical views of the same physical data without the need for replication. In doing so, data virtualization offers a fast and inexpensive way of using data to meet the needs of different users across an organization.

Data virtualization doesn’t require data to be replicated (with data lakes alone in a business’s architecture, you do require replication), so new data can be added more quickly. The best data virtualization tools also provide a searchable catalog of all available data sets, including extensive metadata.
By employing data virtualization, IT data architects can create ‘reusable logical data sets’ that expose information in ways useful for different specific purposes. Data scientists can then adapt these reusable data sets to the individual needs of different machine learning processes: the architects take care of complex issues such as transformation and performance optimisation, leaving data scientists to perform any final, more straightforward, customisations that might be required.
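The idea of a logical view over unreplicated sources can be sketched in a few lines. This is a toy illustration, not a real data virtualization product: the two “physical” sources, their field names, and the view function are all invented for the example. The point is that the view stitches the sources together on demand rather than copying their data into a new store.

```python
# A minimal sketch of the data-virtualization idea: one access point that
# presents a unified logical view over several (hypothetical) sources
# without replicating their data. All names here are illustrative.

# Two "physical" sources in different shapes.
crm_records = [{"customer": "Acme", "region": "EU"}]   # e.g. rows from a CRM API
billing_rows = [("Acme", 1200.0)]                       # e.g. rows from a SQL table

def virtual_customer_view():
    """Lazily stitch both sources into one normalized view, row by row."""
    billing = dict(billing_rows)        # index the second source on the fly
    for rec in crm_records:
        yield {
            "customer": rec["customer"],
            "region": rec["region"],
            "total_billed": billing.get(rec["customer"], 0.0),
        }

for row in virtual_customer_view():
    print(row)   # {'customer': 'Acme', 'region': 'EU', 'total_billed': 1200.0}
```

Because the view is a generator, nothing is copied up front: each consumer sees a consistent, joined record the moment it asks for one, which is the essence of the ‘reusable logical data set’ described above.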

Is Data Virtualisation the key to Machine Learning?


It used to be said that technology was the lifeblood of the organization; then it was connectivity, and now it is data – regardless of industry or size. Thanks to the evolution of technology, everything is captured, and we now have a multitude of data. Used correctly, this data can improve any business greatly.

This all makes data increasingly valuable, and as a result data lakes (repositories that allow organisations to store all their structured and unstructured data) have become a popular part of a business’s data management architecture. By storing all data in data lakes, organisations can access their data easily, saving the business time and money. These lakes also give businesses access to a range of insights that allow them to make well-informed decisions, and applying machine learning to this data allows businesses to forecast outcomes and achieve the best results.

Despite all these benefits, businesses still struggle with integration and data discovery. Storing data in its original format does not take away the need to adapt it for machine learning, and having all your data in one physical place is still like trying to find a needle in a haystack. On top of this, many organisations use various data storage solutions – on-premises servers, the cloud and data centres – so it is more like trying to find a needle in several haystacks.

Fortunately, tools have come onto the market to assist in integrating all this data; however, more complex tasks need a more advanced skill set, and that’s where data virtualization comes into play. Data virtualization provides a single access point to any data – regardless of format. Implemented correctly, it can stitch together various bits of data from multiple sources in real time, removing the need for data to be replicated into one location before a business can read it and gather insight.

As machine learning and big data continue to grow and support modern business decisions, data virtualization is enabling businesses to seamlessly represent their data.