How Does AI Impact Engineering Education?

Machine learning is transforming software development. Can academia keep pace?

On paper, new disciplines in computer science and electrical engineering, such as deep learning, facial recognition, and advanced graphics processing, look easy to exploit for universities wishing to update their STEM curricula. After all, the business press is awash with gushing propaganda on vertical applications for neural networks and pattern recognizers that exploit big data sets. A broad-based academic institution could pick domains in which it excels, such as medicine or industrial automation, and apply emerging chip and subsystem architectures to writing dedicated applications for those vertical domains.

Easy, right? The model has worked before in communications and embedded processing.

But academia will hit fundamental speed bumps in the coming years that could stymie efforts to develop effective criteria. The most obvious problem lies in the simple expansion of high-level software languages into vertical domains. We’ve already seen the colloquial wonderland of “coding,” which has become the magical mantra for institutions wishing to excel in software, as well as for parents of students wanting their offspring to land a good job as a code jockey. However, as AI penetrates further, all that touted training in coding will matter far less.

Already, fields in gaming and embedded processing have turned from well-defined environments written in C++, JavaScript, or Perl to self-generating code modules in which the talents of software programmers come close to being marginalized. In the next generation, leading platforms in deep learning and pattern recognition will dispense with the need for human programmers, because such hardware platforms are not programmed in the traditional sense. They are trained from large data sets at the machine level, so the need for high-level languages virtually disappears. AI developers have been warning about this shift for more than a decade, but few CS/EE schools have moved to prepare their curricula for this change.

This trend toward generative software modules does not mean the death of traditional programming languages, but rather the relegation of such code to an embedded-like status. In fact, the danger in relying on high-level modules and non-programmed training platforms is that the collective knowledge of writing in compiled or object-oriented languages could fade, just as the knowledge of BASIC or FORTRAN has disappeared.

Hence, educators must strike a balance between preserving institutional knowledge of high-level programming and disabusing students and families of the idea that a lucrative career can be built around C++ or Perl expertise.

We don’t know what we know and don’t know
A far bigger problem is only beginning to dawn on engineering departments. Academic institutions, like non-profit corporations and government agencies, face growing demands for compliance testing, outcome-based quantification of results, and the ability to replicate course work for applications in new domains.
Yet leading AI researchers cannot explain how their systems reach optimal solutions to difficult problems.

In some early use cases like autonomous vehicles, the U.S. Department of Transportation has encountered the simplest version of the problem with “black box” data inputs. If the data sets do not correspond to the real world in any one dimension, or along multiple vectors, a system like a self-driving car could fail catastrophically. This challenge of “adequacy and completeness of a large multivariate data set” can be a deal-breaker for deep learning in several vertical domains, yet it is only the simplest version of a far bigger problem stemming from the very nature of neural networks.

The type of neural network that has been most successful at learning without explicit programming uses multiple hidden layers of convolutional connections among many simulated neurons. AI researchers can explain in general terms the back-propagation and genetic algorithms used within the hidden layers, but they cannot explain in detail how the network reaches its conclusions or how it adjusts its synaptic weights to improve its answers.
In short, the best engineering and mathematical minds in the industry have no idea why neural networks work so well, and there is little prospect of human AI experts understanding this in the future, let alone explaining the results to a lay audience.
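
To make the mechanics concrete, here is a minimal sketch of the kind of training loop being described: a toy network with one hidden layer adjusts its synaptic weights by back-propagation until it reproduces a simple pattern (XOR). The architecture, data, and learning rate are illustrative choices, not those of any production deep-learning system, and the point is the closing comment: the learned weights solve the problem without yielding a human-readable explanation.

```python
import numpy as np

# Toy illustration of the training loop described above: a network with one
# hidden layer learns the XOR pattern by repeatedly adjusting its synaptic
# weights via back-propagation. Sizes, seed, and learning rate are arbitrary.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass: the network produces answers from its current weights.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: errors propagate back and every weight gets nudged.
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ err_out
    b2 -= 0.5 * err_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ err_h
    b1 -= 0.5 * err_h.sum(axis=0, keepdims=True)

# Outputs typically approach [0, 1, 1, 0], yet the trained weight matrices
# themselves offer no human-readable explanation of how the answer is reached.
print(out.round(3))
```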

Computer historian George Dyson regards this concept as fundamental to deep learning: “Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand.”
Many of his colleagues in the AI community have informally dubbed this “Dyson’s Law,” and some worry that the search for “explainable AI” may itself be an example of the type of math problem called non-deterministic polynomial-time complete, or NP-complete. (The class is akin to the traveling-salesman problem, where the difficulty of identifying an optimal route rises exponentially with each additional data point, and the problem quickly becomes impractical to solve in what is known as polynomial time.)
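
A quick way to see the combinatorial explosion behind that analogy is to count the candidate routes a brute-force traveling-salesman search would have to compare; the figures below are plain factorial arithmetic, not a claim about any particular solver.

```python
from math import factorial

# Number of distinct round-trip routes through n cities (fixing the start city
# and ignoring direction) that an exhaustive search would have to compare.
def brute_force_routes(n_cities: int) -> int:
    return factorial(n_cities - 1) // 2

for n in (5, 10, 15, 20):
    print(f"{n:>2} cities -> {brute_force_routes(n):,} routes")

#  5 cities -> 12 routes
# 10 cities -> 181,440 routes
# 15 cities -> 43,589,145,600 routes
# 20 cities -> 60,822,550,204,416,000 routes
```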

If an EE professor with a deep background in math cannot understand how a deep-learning platform works, how will such a gap in knowledge be greeted by department heads controlling purse strings?
An algorithm researcher with a background in large data sets was working on a contract basis with a Midwestern university attempting to apply broad pattern-recognition to the retrieval of medical records. (The researcher asked not to be identified for fear of retribution.) She blames the nature of the industry as much as the over-ambition of the university administrators.

“Are department heads trying to jump on a new trend with only incomplete knowledge of how it might be applied to their discipline? Of course. They always do,” she said. “But deep-learning AI is unique because the answers that will be demanded by a university’s audit staff don’t exist anywhere. The best minds around say, ‘This seems to work, but we don’t really know how the hell it does,’ and that just isn’t an answer that’s going to be accepted by many administrators.”

A startup founded by University of Waterloo researchers hopes to make individual neural layers more transparent and easier to explain, while also giving academia another metric to ease the development of vertical applications in deep learning. DarwinAI, founded by Alexander Wong, uses a method called “generative synthesis” to optimize synaptic weights and connections for small, reusable “modules” of neural nets.

DarwinAI touts its toolkit as not only a faster way to develop vertical applications, but also as a way to understand how a vertical deep-learning app works — and to make those apps more practical to develop for small companies and academic institutions. Intel Corp. has validated the generative synthesis method, and is using it in collaboration with Audi on autonomous vehicle architectures. The early involvement of Intel and Audi, however, shows that academia must jump quickly if it wishes to exploit toolkits that seek to level the playing field.

Who controls the research?
The lack of accountability does not stymie all academic research, as attested by dozens of new research grants sponsored by the National Science Foundation and private institutions. Still, grant winners worry they are gleaning mere scraps in a discipline that favors the large existing data centers of Microsoft, Facebook, Alphabet/Google, and Amazon Web Services. A Sept. 26 report in The New York Times, citing in part a review of the field from the Allen Institute for AI, suggests that universities will always lag in AI because of their inability to approach, by even a fraction, the compute power of centralized corporate data centers. Part of the problem stems from the exponential acceleration of the number of calculations necessary to run basic tasks in deep learning, which the Allen Institute and OpenAI estimate has soared by a factor of 300,000 in six years.
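
Taking the reported 300,000x increase over six years at face value, the implied growth rate can be worked out directly; the short calculation below only restates that figure, it is not an independent estimate.

```python
import math

growth_factor = 300_000      # reported increase in compute for large training runs
years = 6

doublings = math.log2(growth_factor)           # roughly 18 doublings
doubling_time_months = years * 12 / doublings  # roughly one doubling every 4 months
annual_factor = growth_factor ** (1 / years)   # roughly 8x growth per year

print(f"{doublings:.1f} doublings, one every {doubling_time_months:.1f} months "
      f"(~{annual_factor:.1f}x per year)")
```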

Even the neutral arbiters are falling under corporate control. The Allen Institute, founded by Microsoft co-founder Paul Allen, has remained independent, but OpenAI, another analytical AI firm financed by Elon Musk and other leading executives, became a for-profit company in early 2019, receiving a $1 billion investment from Microsoft as a means of gaining a guaranteed source of compute power. OpenAI remains respected for its work in analyzing platforms, but analysts worry about its continuing independence.

NSF has meanwhile tried to encourage more independence among software developers, where academia’s role can in theory be protected. The National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign recently received a $2.7 million grant from NSF to develop a deep-learning software infrastructure, though even this effort requires tight links with IBM and Nvidia. Even university centers financed by NSF still have corporate partners that can skew the research. The strong role played by Intel, Mellanox, Dell EMC, IBM, and Nvidia in the Texas Advanced Computing Center at the University of Texas at Austin is another example.

Waiting for clarity – and explanations
The incremental steps made by startups such as DarwinAI may be harbingers of a new era in which deep learning is understood at a finer-grained level and can be exploited more easily by academic researchers. Unease over the direction and control of AI research may eventually turn out to be as overblown as Defense Department dominance of tube-based computing 70 years ago.
What is clear, however, is that traditional academic models for incorporating CS/EE projects into overall STEM goals are sorely in need of an update. Software education programs based on traditional high-level languages may be so outdated that they could well be scrapped, surviving only as a means of providing programmers with historical background.

For now, universities are unlikely to cobble together realistic deep-learning research projects until better means of explaining the results of neural-network training emerge. Even after these platforms become more understandable, universities will have to seek out vertical domains and specialized optimization methods where their projects will not have to compete head-to-head with corporate data centers whose computing resources far exceed those of even the most well-endowed universities.

India: Industrial AI

“The kind of innovation India’s industrial AI startups have is unparalleled across the globe. Surprisingly, such innovation is not stemming out of the Bay area,” said Derick Jose, co-founder of Flutura, a global leader in applying industrial AI for energy and engineering applications. “Our strength is the cluster-oriented talent we have been able to hone. For example, Bengaluru has defence as a vertical, Pune has auto and manufacturing, and Chennai has auto and original equipment manufacturers (OEMs). They are also thriving ecosystems where engineering prowess and industry intersect.”

Bengaluru, even before the advent of IT, was a defence and aeronautics cluster with a very strong engineering base. Then, slowly, an intersection of analytics and engineering evolved in the southern Indian city. A great example of this is Sunlux Technologies, which manufactures sensors for the Indian Navy.

“In every hi-tech hub of developed innovation economies, the rise of the modern-day tech entrepreneurship ecosystem (startups and venture capital) happened under the larger canopy of an advanced industrial innovation ecosystem,” said Vishwanathan Sahasranamam, co-founder and CEO of Forge Accelerator. “This ecosystem powered and pioneered the development and commercialisation of advanced technologies in creating globally competitive solutions for the most challenging industrial sectors. The greatest contribution of this being the creation of highly competent technologists, who, with the vision to exploit the power of these technologies and the entrepreneurial ambitions, created the next generation of multibillion-dollar ventures.”

A 2016 McKinsey report states that, while companies “feel” prepared, only 30% of technology suppliers and 16% of manufacturers have an overall strategy for Industry 4.0 (the trend towards automation and data exchange in manufacturing technologies) in place, and only 24% have assigned clear responsibilities for implementing it.

“India is known for its software prowess and we have the talent and capabilities in machine learning, computer vision, and deep learning,” said Tarun Mishra, co-founder of DeTech Technologies. “The next level is building the AI layer on top and India is front-ending this revolution. In many ways, we are the OEMs of this data and it is only organic for us to expand to the field of industrial AI.” DeTech Technologies is a Chennai-based startup incubated at the Indian Institute of Technology Madras in 2016. It works with leading global oil and gas companies.

What would help Indian startups in this space is larger government grants, akin to what the US, China, and Israel provide. Indian talent is also slowly warming up to this space; thus, engineering colleges must modify their curricula to keep pace.

“The startup story of India has largely been domestic till now. Thanks to the kind of global companies being built, industrial AI is emerging as a space where India can establish a leadership position in the global AI ecosystem.”

At a micro level, according to some of the founders of these startups, here is what has worked for them:

Aniruddha Bannerjee, founder, SwitchOn: “…show tangible return on investment. From day one, we work with customers who’re looking at numbers. Working with a demanding customer makes it easier to work with others.” SwitchOn builds AI solutions for auto and FMCG firms.

Harsimrat Bhasin, co-founder of Neewee AI: “Bodhee’s value proposition of delivering analytics outcomes on the shop-floor and its holistic approach of integrating silos in the manufacturing value chain has helped us gain faster traction.” Bodhee is the company’s industrial analytics product with an AI solution that predicts industrial outcomes.

Leveraging AI for quick claims settlement, says Anamika Roy Rashtrawar of IFFCO Tokio General Insurance

Technology will play a key role in extending the reach of insurance products to the remotest parts of the country, says Anamika Roy Rashtrawar, Wholetime Director, IFFCO Tokio General Insurance. In an interview with Naveen Kumar of Money Today, Rashtrawar shares her views on recent developments in the general insurance space and the company’s future plans that will have a significant impact on customer experience.

Money Today: Do you think technology will play a bigger role in driving insurance penetration in India, considering the affordable data plans and the growing number of smartphone users?

Anamika Roy Rashtrawar: Yes, indeed. Technology has been a key driver to promote insurance products in recent times. Going forward, as more and more customers, especially from Tier-IV towns and rural India, get access to the internet through high-speed networks, technology will help extend the reach of insurance products even to the remotest parts of the country. Increasing use of smartphones will further facilitate the purchase of insurance products online and in servicing claims.

What has been the role of digitalisation in your organisation so far?

Rashtrawar: Our focus has always been the customer. So, in everything we have been doing, both in helping the customer buy a policy and in servicing claims, digitalisation has played a key role in simplifying processes and operations. We have developed user-friendly mobile apps to reduce the turnaround time for every request that comes through. Now, policyholders can even download their policy documents, apply for claims and track their claims status with a few clicks.

Technology, as you said, has been key to increasing insurance penetration in India. How do you propose to innovate further with technology as the central theme?

Rashtrawar: We have leveraged artificial intelligence, or AI, for quick settlement of auto insurance claims for individual car owners. Now, we plan to extend AI to cover all car models and two-wheelers. That apart, going forward, AI and data analytics could be the backbone of our tech infrastructure, which will allow us to detect and prevent fraud, especially in the areas of motor and health insurance.

We have already introduced RPA (robotic process automation) for motor insurance claims surveyors. We intend to automate various other processes, which are repetitive in nature, to build efficiency in servicing our clients.

We also plan to adopt customer Contact Centre Management and automate the call centre functionalities by integrating the backend systems directly with the IVR (Interactive Voice Response) system.

How much do the digital-only channels contribute to your overall business? What has been the growth trend?

Rashtrawar: In the first-half of this financial year ended September 30, 9 per cent of the total policies were bought digitally, on the back of 133 per cent year-on-year growth. Motor and health insurance products have been the key drivers of our digital business. For other products, customers prefer traditional modes of distribution.

What has been the impact of the demand slowdown in auto sales on the motor insurance sector?

Rashtrawar: The auto industry has been going through a tough phase, with most automobile manufacturers witnessing a fall in sales. This has had a domino effect on the motor insurance sector, considering that premiums for new cars make up a significant portion of the business.

There has been a lot of noise around a usage-based pricing mechanism, or pay-as-you-use insurance products, for the motor insurance category. Will the rise in digital penetration in India make it a reality in the near future?

Rashtrawar: Yes, the concept of pay-as-you-drive, which is prevalent in the Western world, can become a reality for Indian consumers as well. But, it may have to be tweaked according to the Indian conditions, wherein premium rates will vary based on various parameters, including the driving behaviour of the customer.

Technology will allow insurers to have a more accurate view of how one drives, and it will help lower the premium for policyholders who are responsible drivers with a safety-first approach. On the other hand, policyholders with high risks can be charged a higher premium based on their actual driving experience.   

Is a lower premium still the dominant factor influencing a customer’s buying decision, or are customers willing to pay more for value-added services, including a hassle-free claims settlement process?

Rashtrawar: Customers are now willing to pay a premium for value-added services. Their decision is no longer solely driven by premium rates. Some value-added services go beyond the typical health or motor insurance coverage; for instance, we provide roadside assistance and compensation for the medical expenses of the vehicle’s occupants in case of an accidental injury.

Lacework: The New Generation of Cloud Security Solutions for Enterprise DevOps Teams is Here

The cloud has become an increasingly popular infrastructure for all types of organizations, and this growing popularity makes it a big target. Thus, it’s imperative to protect the cloud from account hijacks, data breaches, cryptojacking, malware, theft, and other malicious activity from a growing number of bad actors.

It should go without saying that your cloud, containers, and native applications should be fast yet safe and secure from both external and internal threats. Old-school cloud security solutions are out, and new-generation, complete security platforms like Lacework are in.

The public cloud has been great for enabling enterprises to automatically scale their workloads, deploy them faster, and build them more freely. This has helped support their desire for speed and scale, but it has also made it increasingly difficult for these same organizations to make sense of the activity happening within their cloud environments.

Emerging cybersecurity companies like Lacework address this by giving enterprise security teams a lightweight agent that provides visibility into all the processes and applications within their organization’s cloud and container environments. This breadth and depth of visibility helps detect vulnerabilities, while machine learning analysis identifies anomalous behavior that poses a threat.

Who is Lacework?

Lacework is a leading cybersecurity company that provides organizations with a zero-touch, end-to-end cloud security platform: a comprehensive, automated solution that gives enterprise businesses and DevOps teams complete control over their cloud network.

The technologies that power Lacework’s SaaS platform are backed by advanced machine learning and automation to provide organizations with unparalleled defence against a growing number of vulnerabilities, data security issues, and online threats within APIs, file systems, applications, containers, and workloads across all their native and multi-cloud environments.

What Does Their Complete Cloud Security Platform Cover?

Security Visibility

Get deep observability into all of your organization’s cloud accounts, workloads, and microservices to give your DevSecOps team tighter security control over your cloud environments.

Host Intrusion Detection

Host-based IDS identifies breaches and automatically sends notifications so security teams can analyze and address them immediately.

Threat Detection

Empower security teams to identify common threats that specifically target your cloud servers, containers, and IaaS accounts so you can act on them before your company is at risk. With runtime threat defence, security teams can easily check for vulnerabilities in their cloud workload and container environments.

File Integrity Monitoring

You won’t have to worry about your cloud security failing to keep up with the speed of deployments. The software’s FIM (File Integrity Monitoring) technology automates the process via a monitoring agent that checks each file’s integrity, thereby eliminating the need for intensive management and rule development.
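
The core idea behind file-integrity monitoring is easy to sketch: hash the files you care about, keep a baseline, and flag anything that deviates. The snippet below is a generic, hypothetical illustration of that idea (including the watched path), not Lacework’s agent or its actual implementation.

```python
import hashlib
from pathlib import Path

# Generic file-integrity check: compare current file hashes against a stored
# baseline and report anything added, removed, or modified. A real FIM agent
# layers scheduling, alerting, and tamper resistance on top of this basic idea.
def snapshot(root: str) -> dict[str, str]:
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def diff(baseline: dict[str, str], current: dict[str, str]) -> dict[str, list[str]]:
    return {
        "added": sorted(current.keys() - baseline.keys()),
        "removed": sorted(baseline.keys() - current.keys()),
        "modified": sorted(p for p in baseline.keys() & current.keys()
                           if baseline[p] != current[p]),
    }

baseline = snapshot("/path/to/watch")          # hypothetical directory; taken once and stored securely
changes = diff(baseline, snapshot("/path/to/watch"))  # run later to detect drift
print(changes)
```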

Anomaly Detection

Easily detect and resolve any anomalous changes in behavior across your cloud workloads, containers, and IaaS accounts that represent a security risk or an indicator of compromise (IOC).

Configuration Compliance

Quickly spot IaaS account configurations that violate compliance and cloud security best practices and could put your company at severe risk of a data breach.

Account Security

Continuous monitoring of each cloud account can be a tedious affair, but you can count on Lacework to monitor and secure GCP, Azure, and AWS accounts for insights and configuration changes that sometimes act as a warning signal of online threats.

Container Security

This solution is fully container-aware and monitors all container activity regardless of the container platform DevOps teams rely on, such as Docker or Kubernetes. If there is any malicious activity in a containerized environment, it will generate an anomaly at one layer or another. This is where Lacework’s threat detection and behavioral analysis help identify anomalous activity across cloud containers so issues can be addressed before any major damage is done to the organization.

Cloud Workload Security

Their lightweight agents collect and send data to Lacework’s backend in the cloud, where the data is aggregated and a baseline of activity in the cloud environment is created. This automated method of detecting undesired activity in cloud and container workloads offers great benefits over traditional rule writing.
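
As a rough, hypothetical illustration of the “build a baseline, then flag deviations” approach described above (and not a description of Lacework’s proprietary models), new activity can be scored against statistics learned from historical activity:

```python
import statistics

# Hypothetical per-hour counts of outbound connections from one workload.
history = [42, 38, 45, 40, 44, 39, 41, 43, 37, 46, 40, 42]

mean = statistics.mean(history)
stdev = statistics.pstdev(history)

def is_anomalous(observed: int, threshold: float = 3.0) -> bool:
    """Flag activity more than `threshold` standard deviations from the baseline."""
    return abs(observed - mean) > threshold * stdev

print(is_anomalous(41))    # False: within the normal range
print(is_anomalous(400))   # True: worth investigating as a possible IOC
```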

Kubernetes Security

Security teams get deep visibility into their Kubernetes deployments. This includes high-level dashboards of Kubernetes clusters, pods, nodes, and namespaces, combined with visibility into the communication between all of these at the application, process, and network layers, to provide more advanced security for your Kubernetes environments.

In Summary

Lacework’s complete cloud security platform provides organizations of all types and sizes with unprecedented visibility and automated intrusion detection, while delivering one-click investigation and simplifying cloud compliance across Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), all while giving security teams the most comprehensive view of vulnerabilities and risks across their cloud workloads and containerized environments.

If security is a big concern for your organization, consider looking into Lacework today to see if its complete security platform fits your security objectives. You can get a free security assessment and a detailed custom report identifying cloud compliance misconfigurations, vulnerabilities, anomalies, and even hidden threats within your native and multi-cloud environments.

Public Cloud: What’s Happening?

Go to any industry outside of telecommunications (telco) and ask any CTO, “What has been the most disruptive technical innovation that impacted your business in the last decade?” The answer will invariably include the incredible disruption of the public cloud. Companies like Netflix, Spotify and Lyft, which have certainly made our lives better, would not exist had the public cloud not become a reality in the mid-2000s. But the public cloud is nowhere to be seen in telco. Instead of operators implementing those learnings in our industry, I often hear, “I can’t move to the public cloud.”

Likewise, I read article after article about operators struggling financially, unsure of what to do next. With up-and-coming challenger brands nipping at their heels, the biggest idea for cost savings might be staring them right in the face: move their workloads to the public cloud, and soon.

So what are the most common misconceptions when it comes to the public cloud and telco? Let’s take a look at the top three.

Myth 1: Public cloud is more expensive.

By moving to the public cloud, some operators have been able to save almost 80% on the total cost of ownership (TCO) of running a complex software application. To truly understand the cost savings that telcos can achieve, you must consider the entire stack — from the ground on which servers sit to the people who manage various applications.

The TCO here includes the cost of installation, database licenses, hardware, power, management time and many other factors. With the public cloud, you don’t need to buy hardware, and there are no data center costs, such as real estate, power, operations, security and staffing. You are able to eliminate expensive third-party database software and remove disaster recovery and static test beds. Also, traditional monolithic applications running on-premises result in slow testing and release procedures.
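
As a back-of-envelope illustration of that TCO framing, the sketch below sums hypothetical cost lines for an on-premise deployment and a public-cloud equivalent; every figure is a placeholder, not a benchmark from any operator.

```python
# Hypothetical annual cost lines for one on-premise application, in USD.
on_prem = {
    "hardware_amortization": 120_000,
    "database_licenses": 90_000,
    "data_center_space_and_power": 60_000,
    "disaster_recovery_and_test_beds": 50_000,
    "ops_staff_time": 110_000,
}

# Hypothetical public-cloud equivalent: pay-as-you-go compute plus a smaller
# operations effort; no hardware, facilities, or duplicated DR environment.
public_cloud = {
    "compute_and_storage": 70_000,
    "managed_database": 25_000,
    "ops_staff_time": 40_000,
}

on_prem_tco = sum(on_prem.values())
cloud_tco = sum(public_cloud.values())
savings = 1 - cloud_tco / on_prem_tco

print(f"on-prem: ${on_prem_tco:,}  cloud: ${cloud_tco:,}  savings: {savings:.0%}")
# With these placeholder numbers the savings work out to roughly 69%; the
# "almost 80%" cited above depends entirely on an operator's actual cost structure.
```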

The public cloud frees operators from all of these requirements, empowering them to be more agile and reduce TCO. Of course, some operators may already have a significant investment in on-premise servers for the private cloud. However, those that fail to migrate core functions to the public cloud stand to lose their competitive market position in the long run.

Myth 2: Public cloud is not secure.

Today’s telecom operators are increasingly expected to know and comply with security and privacy laws that vary radically across countries, regulatory bodies and even policies within individual companies. Security is usually the first objection raised about moving to the public cloud.

Many telco CTOs don’t know that cloud offerings from large cloud providers, like Google, Amazon and Microsoft, must meet stringent guidelines for security and privacy. For instance, Google, a technology partner of ours, spent $9 billion last year alone to build data centers and implement security, earning it a place as a Forrester Wave cloud platform security leader. Telcos cannot match that level of investment in innovation and security. Moreover, Gartner anticipates that public cloud workloads will see 60% fewer cybersecurity incidents than standard data centers through 2020.

What public cloud features demonstrate superior security? Look for vendors that commit to the following:

  1. Layered defense with strong perimeters and surveillance on cloud data centers.
  2. Highly controlled access, preventing any unauthorized partner, vendor or employee from entering, with defense-in-depth controls at every layer.
  3. Encryption at rest by default.
  4. Transparency through frequent third-party certifications and audits.

Myth 3: If it runs on a cloud, you get all the benefits of a public cloud.

There’s a tendency to use the term “cloud-native” without truly understanding what it is or its benefits. For some telco CTOs, vendor terminology such as “cloud-ready” can be confusing and misleading, and making the transition to the cloud can require a steep learning curve. Yet, taking the time to educate yourself can help ensure that the strategies you’re considering are genuinely fit for your needs. An operator that can only run in the private cloud — or that drops existing software architecture onto Azure, AWS or GCP — is not cloud-native. This type of “lift and shift” exercise cannot leverage the full benefits of the public cloud.

Truly cloud-native applications must be architected with 21st-century technology. If an application that claims to be “cloud-ready” or “cloud-native” still requires an old-school Oracle relational database, it’s not cloud-native. Applications that are truly suited to the cloud take advantage of containers and orchestration platforms such as Kubernetes or Docker Swarm.

Further, truly cloud-native applications are faster and more powerful, and they enable operators to avoid traditional license fees.

Revealing The Truth

Given the popularity of running enterprise workloads in the cloud, it’s not surprising that telecom operators are beginning to recognize the importance of cloud migration. It’s not only crucial to understand the different types of cloud offerings, but you must also realize how those differences will impact your business and your bottom line.

For example, telcos may decide to take an interim half-step and move their enterprise software applications onto a self-provisioned and self-managed private cloud. By using containerization systems such as Kubernetes, you could take this step to maximize your investment in any hardware or database licenses you may already own.

Ongoing advances in technology have presented operators with a series of new opportunities, along with the inevitable accompanying challenges. As demand for data continues to grow, requiring new network capacity, and as traffic patterns become increasingly unpredictable, new solutions are needed. In a recent Ovum report, analyst Roy Illsley wrote, “Ovum believes that for telecoms companies to be able to compete in the digital world, the adoption of public cloud is inevitable.”

With the current explosion in IoT connectivity and several 5G networks set to launch this year and in 2020, application data will grow exponentially. Now is the time for telecommunications leaders to dispel the myths and make the move to the public cloud.

Cloud Market

In 2019, the public cloud computing market is set to reach $258 billion (USD). Amazon Web Services (AWS) dominates the industry with a 32% share. Its competitors are lagging behind, with just 16.8% for Microsoft Azure and 8.5% for Google Cloud.
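
Applying those shares to the headline market size gives a rough sense of the dollar gap between the providers; this is a back-of-envelope calculation that assumes the shares apply to the full $258 billion figure.

```python
market_size_busd = 258  # projected 2019 public cloud market, in billions of USD
shares = {"AWS": 0.32, "Microsoft Azure": 0.168, "Google Cloud": 0.085}

# Rough implied revenue for each provider, assuming the shares cover the whole market.
for vendor, share in shares.items():
    print(f"{vendor:>15}: ~${market_size_busd * share:.1f}B")
#             AWS: ~$82.6B
# Microsoft Azure: ~$43.3B
#    Google Cloud: ~$21.9B
```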

Cloud adoption is becoming a horizontal transformation that does not involve large corporations alone. With 57% of global spending on public cloud services, the U.S. still remains the pace-setter in cloud adoption.

However, more small to medium-sized businesses (SMBs) are forming their presence online not just in the U.S. but in most other highly-developed countries as well.

For example, in Canada, the country with the highest Internet penetration among G20 nations, 97% of enterprises plan to increase their technology investment. Cloud services are particularly appealing to SMBs as they are highly scalable and do not require huge capital outlays to implement a digital transformation.

Unsurprisingly, AWS is the favorite cloud services provider of SMBs with a 53% adoption rate.

However, this rate actually dropped from 60% in 2018, a significant negative trend that may be interpreted in many ways, the first of which may be Azure’s increased popularity. Microsoft’s cloud service rose from 32% to 41% adoption during the same period, in fact.

This info may be somewhat misleading though, as AWS saw 2019 as an “investment year” and focused on implementing new technologies in all verticals, such as machine learning analytics, augmented reality, artificial intelligence, Internet of Things, serverless implementation and much more.

So far, so cool. In fact, after earning more than $7.6 billion in Q1 (more than twice the $3.4 billion earned by Azure), AWS grew by 45% in Q4. And judging by its plans, AWS could quickly become a threat to Oracle in databases as well.

The top reason for cloud adoption is still providing data access from anywhere, a trend that perfectly matches our social and technological evolution. In our highly interconnected society, we want access to technology at any time, from any place.

AWS attracts more than half (52%) of early-stage cloud users, so it will likely regain its position as new companies of all sizes keep jumping on the cloud bandwagon. Another area likely to grow is Docker containers, whose adoption increased from 49% to 57% this year.

As container orchestration tools such as Kubernetes also become more popular, growing 27% in 2019 from 48% adoption in 2018, this trend is set to spike in the upcoming period.

Adoption of AWS’s container services didn’t change this year (a flat 44%), while Azure Container Service is trying to bridge the gap with an 8-point increase (from 20% to 28% between 2018 and 2019).

AI will disrupt white-collar workers the most, predicts a new report

A surprise finding: Conventional wisdom says robotics, artificial intelligence, and automation will radically alter work for blue-collar truckers and factory workers. In fact, white-collar jobs will be affected more, according to a new analysis by the Brookings Institution.

The analysis: Researchers looked at the text of AI patents, then the text of job descriptions, and quantified the overlap in order to identify the kinds of tasks and occupations likely to be affected. The analysis says that AI will be a significant factor in the future work lives of managers, supervisors, and analysts, shaking up law firms, marketing roles, publishers, and computer programming.
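
The Brookings method boils down to measuring textual overlap between two corpora. Here is a minimal sketch of that idea using TF-IDF cosine similarity on toy stand-in text; it is not the study’s actual pipeline, which applies its own weighting and occupation-level data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for the two corpora the study compares.
ai_patent_text = ("system for predicting and classifying data using trained "
                  "models to optimize decisions")
job_descriptions = {
    "market research analyst": "analyze data, forecast sales, prepare reports "
                               "and recommend decisions",
    "truck driver": "operate heavy vehicles, load cargo, follow delivery "
                    "routes and schedules",
}

vectorizer = TfidfVectorizer(stop_words="english")
docs = [ai_patent_text] + list(job_descriptions.values())
tfidf = vectorizer.fit_transform(docs)

# Similarity of each job description to the patent text: a crude proxy for exposure.
scores = cosine_similarity(tfidf[0:1], tfidf[1:]).ravel()
for job, score in zip(job_descriptions, scores):
    print(f"{job}: overlap score {score:.2f}")
```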

A caveat: Although white-collar workers are likely to bear a significant share of the disruption that AI and automation bring, they may find it easier to re-train or find alternative roles, since they are more likely to live in cities or hold college degrees.
