The pace at which human-machine technologies are evolving and becoming seamlessly integrated into our lives is unimaginable. Here’s a look at the top technologies that are impacting the way we think, live and communicate.
Artificial Intelligence (AI)
AI is the biggest trend and has grown exponentially. Combined with machine learning, AI has the potential not only to bring about enormous changes in business but also to help create solutions for prevalent social issues. A large number of organizations have begun to realize the direct implications of infusing AI into their operations. While the groundwork in AI has already been laid, the times ahead are exciting as ideas and prototypes take shape in real projects across multiple sectors – healthcare, retail, construction, banking, manufacturing, etc.
AI-powered software improves the efficiency of document analysis for legal use: machines can review documents and flag them as relevant to a particular case. Once a certain type of document is marked as relevant, machine learning algorithms can get to work finding other documents that are similarly relevant. Machines are much faster than humans at sorting through documents and can produce results that can be statistically validated. They can reduce the load on the human workforce by forwarding only questionable documents for review, rather than requiring humans to review every document. This matters because legal research, however monotonous, must be done in a timely and comprehensive manner.
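The “find similar documents” step can be sketched with a simple bag-of-words similarity measure. The corpus, seed document and threshold below are purely illustrative, not taken from any real e-discovery system, and production tools would use far richer models:

```python
import math
from collections import Counter

def tf_vector(text):
    """Term-frequency vector for a lowercased, whitespace-tokenized document."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def rank_by_relevance(seed_doc, corpus, threshold=0.2):
    """Return documents similar to a human-flagged seed, most similar first."""
    seed = tf_vector(seed_doc)
    scored = [(cosine_similarity(seed, tf_vector(d)), d) for d in corpus]
    return [d for score, d in sorted(scored, reverse=True) if score >= threshold]

# A reviewer flags one document as relevant; the machine proposes similar ones.
seed = "breach of contract damages supplier agreement"
corpus = [
    "supplier agreement damages claim for breach of contract",
    "quarterly marketing newsletter and event calendar",
    "contract breach notice sent to the supplier",
]
print(rank_by_relevance(seed, corpus))
```

Only the two contract-related documents clear the threshold; the newsletter is filtered out, which is exactly the triage effect described above.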
Today, AI is an umbrella term that encompasses everything from robotic process automation to actual robotics. It has gained prominence recently due, in part, to big data – the increase in the speed, size and variety of data businesses are now collecting. AI can perform tasks such as identifying patterns in data more efficiently than humans, enabling businesses to gain more insight from their data. According to Constellation Research, the AI market will surpass $100 billion by 2025, and IDC states that by 2018, 75% of developer teams will include AI functionality in one or more applications or services.
Internet of Things (IoT)
“Things” in the IoT sense, can refer to a wide variety of devices such as heart monitoring implants, biochip transponders on farm animals, cameras streaming live feeds of wild animals in coastal waters, automobiles with built-in sensors, DNA analysis devices for environmental/food/pathogen monitoring, or field operation devices that assist firefighters in search and rescue operations. Legal scholars suggest regarding “things” as an “inextricable mixture of hardware, software, data and service”.
These devices collect useful data with the help of various existing technologies and then autonomously share that data with other devices.
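This device-to-device flow is usually built on a publish/subscribe pattern, with protocols such as MQTT in real deployments. The toy in-memory broker below, with an invented topic name and threshold, only illustrates the idea of one device reacting to another’s readings:

```python
from collections import defaultdict

class Broker:
    """Toy in-memory message broker; real IoT systems use protocols like MQTT."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Deliver each published reading to every subscriber of the topic
        for handler in self.subscribers[topic]:
            handler(payload)

broker = Broker()
alerts = []

# A monitoring device subscribes to readings from a temperature sensor
broker.subscribe("home/temperature", lambda c: alerts.append(c) if c > 30 else None)

# The sensor publishes readings autonomously; only hot readings trigger an alert
for reading in [21.5, 33.0, 24.1]:
    broker.publish("home/temperature", reading)

print(alerts)  # [33.0]
```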
There seems to be a general consensus that the term “the Internet of Things” was coined by Kevin Ashton of Procter & Gamble, later of MIT’s Auto-ID Center, in 1999. The first written and referable source to mention the Internet of Things seems to be a White Paper published by the MIT Auto-ID Center in November 2001 (but made public only in February 2002), which cites an earlier paper from October 2000.
The first research article to mention the Internet of Things was preceded by an article published in Finnish in January 2002. The implementation described there was developed by Kary Främling and his team at Helsinki University of Technology in Finland. In contrast to the RFID- and supply-chain-centric view of the Internet of Things, the vision presented there was closer to the modern one: an information system infrastructure for implementing smart, connected objects.
It has been predicted that IoT will comprise 200 billion ‘smart’ devices by the year 2020 – roughly 26 ‘smart’ devices per person on the planet. These smart devices will need to be collaboratively connected through the internet, turning them into one big integrated system and driving a major shift in human-machine interaction. The implications of this technology are immense, ranging from a smart home all the way to an entire smart city! At present, IoT is being used by businesses to increase process efficiencies and deliver better customer experiences, thus generating new revenue streams.
Blockchain
Blockchain is a digital ledger that provides a secure way of making and recording transactions, agreements and contracts. Coupled with cryptocurrencies, it has incited a media frenzy; however, the opportunities it presents for business are yet to be truly explored.
After the initial hype around blockchain in the financial services industry, we are seeing many more potential use cases in government, healthcare, manufacturing, supply chain/logistics, F&B and other industries. In 2018, many blockchain technology platforms will move from the development phase to the pilot phase in the banking, media and industrial sectors.
Information held on a blockchain exists as a shared — and continually reconciled — database, a way of using the network with clear benefits. The blockchain database isn’t stored in any single location, meaning the records it keeps are truly public and easily verifiable. No centralized version of this information exists for a hacker to corrupt. Hosted by millions of computers simultaneously, its data is accessible to anyone on the internet.
Anything that happens on the blockchain is a function of the network as a whole, and some important implications stem from this. By creating a new way to verify transactions, blockchain could make aspects of traditional commerce unnecessary. Stock market trades become almost simultaneous on the blockchain, for instance, and it could make types of record keeping, like a land registry, fully public. And decentralization is already a reality.
A global network of computers uses blockchain technology to jointly manage the database that records Bitcoin transactions. That is, Bitcoin is managed by its network, and not any one central authority. Decentralization means the network operates on a user-to-user (peer-to-peer) basis. The forms of mass collaboration this makes possible are just beginning to be investigated.
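The core mechanism behind this tamper-evidence can be sketched in a few lines: each block commits to the hash of the block before it, so changing any earlier record breaks every later link. This is a minimal illustration, not a real blockchain (there is no proof-of-work, networking or consensus):

```python
import hashlib
import json

def block_hash(block):
    """SHA-256 over the block's canonical JSON encoding."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def add_block(chain, transactions):
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def is_valid(chain):
    """Every block must reference the hash of the block before it."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, ["alice pays bob 5"])
add_block(chain, ["bob pays carol 2"])
print(is_valid(chain))  # True

# Tampering with an earlier block breaks every later link
chain[0]["transactions"] = ["alice pays mallory 500"]
print(is_valid(chain))  # False
```

Because every participant can recompute these hashes, the network as a whole can reject a tampered copy without trusting any central authority.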
Cloud Computing
Cloud-based architecture has been around for quite a while now, but in 2018 we expect to see many more organizations take advantage of the simplicity and high performance the cloud guarantees. We also expect a sharp uptick in cloud-hosted Software-as-a-Service (SaaS), which provides an opportunity to grow computing capabilities without costly investments in physical or technical infrastructure.
Cloud computing resources are delivered by server-based applications through digital networks or through the public Internet itself. The applications are made available for user access via mobile and desktop devices. This much is pretty obvious.
According to the National Institute of Standards and Technology (NIST), five specific qualities define cloud computing: on-demand self-service, broad network access, resource pooling, rapid elasticity (or expansion) and measured service.
The front end is the visible interface that computer users or clients encounter through their web-enabled client devices. It should be clear, though, that not all cloud computing systems use the same user interface.
The back end, on the other hand, is the “cloud” part of a cloud computing architecture, comprising all the resources required to deliver cloud-computing services. A system’s back end can be made up of a number of bare-metal servers, data storage facilities, virtual machines, a security mechanism, and services, all built in conformance with a deployment model and together responsible for providing a service.
It is the primary responsibility of the back end to provide a built-in security mechanism, traffic control, and protocols. The operating system running on a bare-metal server – popularly known as a hypervisor – uses well-defined protocols to allow multiple guest virtual machines to run concurrently. The hypervisor also mediates communication between its guest machines and the connected world beyond.
The server virtualization methodology used by hypervisors bypasses some of the physical limitations that stand-alone servers can face. Virtualization allows software to trick a physical server into thinking it is in fact part of a multiple server environment, and therefore capable of drawing on extra, otherwise underutilized, capacity.
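This capacity pooling can be pictured with a simplified model of a host admitting guests against a shared core budget. The class and numbers below are invented for illustration; real hypervisors schedule far more dynamically and often overcommit resources:

```python
class PhysicalServer:
    """Simplified model of a bare-metal host pooling CPU cores among guest VMs."""
    def __init__(self, cores):
        self.cores = cores
        self.guests = {}

    def allocated(self):
        return sum(self.guests.values())

    def start_vm(self, name, cores):
        # Admit a guest only if pooled capacity remains
        if self.allocated() + cores > self.cores:
            raise RuntimeError(f"not enough capacity for {name}")
        self.guests[name] = cores

    def stop_vm(self, name):
        # Freed cores return to the pool for other guests
        del self.guests[name]

host = PhysicalServer(cores=16)
host.start_vm("web", 4)
host.start_vm("db", 8)
print(host.allocated())   # 12

host.stop_vm("web")
host.start_vm("batch", 8)  # reuses the capacity freed by "web"
print(sorted(host.guests))  # ['batch', 'db']
```

The point of the sketch is the last step: capacity released by one guest is immediately reusable by another, which is exactly the otherwise-underutilized capacity virtualization recovers.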
As the number of services hosted by a cloud computing provider grows, the higher traffic and compute loads that grow with it must be anticipated and accommodated, and exponentially growing demands for storage space can’t be ignored either.
To properly maintain and protect a client’s data, a cloud computing architecture requires greater redundancy than might be needed for locally hosted systems. The copies generated by this necessary redundancy allow the central server to jump in and access backup images to quickly retrieve and restore needed data.
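The restore path described above can be sketched as a store that writes every record to several replicas and falls back to a surviving copy when one is lost. The class, replica count and record below are illustrative only; real systems add quorum writes, consistency checks and geographic placement:

```python
class ReplicatedStore:
    """Toy key-value store keeping each record on several replicas."""
    def __init__(self, replicas=3):
        self.replicas = [{} for _ in range(replicas)]

    def put(self, key, value):
        # Write every copy; real systems often acknowledge after a quorum
        for replica in self.replicas:
            replica[key] = value

    def get(self, key):
        # Fall back to surviving copies when a replica has lost the record
        for replica in self.replicas:
            if key in replica:
                return replica[key]
        raise KeyError(key)

store = ReplicatedStore(replicas=3)
store.put("invoice-42", "paid")

# Simulate a failed replica losing its copy of the data
store.replicas[0].clear()
print(store.get("invoice-42"))  # paid
```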
In a cloud computing architecture, all applications are controlled, managed and served by a cloud server. Its data is replicated and preserved remotely as part of the cloud configuration. A well-integrated cloud system can create nearly limitless efficiencies and possibilities.
Augmented Reality (AR) and Virtual Reality (VR)
Worldwide spending on AR and VR is expected to reach $17.8 billion in 2018, which is 95% more than the $9.1 billion spent in 2017. Businesses are quickly understanding the potential of AR and are integrating the technology into their business and marketing plans to make the most of the first-mover opportunities available. Industries like telecommunications, manufacturing and energy, where the workforce is scattered across remote areas, are already using AR extensively for communication, training and more. The scope is even greater in healthcare, where AR can be used in the operating room to bring virtual elements into the real world.
F8 2018: Open AI Frameworks and New AR/VR Advancements
F8 focused on the long-term technology investments in three areas: connectivity, AI, and AR/VR. Chief Technology Officer Mike Schroepfer kicked off the keynote, followed by Engineering Director Srinivas Narayanan, Research Scientist Isabel Kloumann, and Head of Core Tech Product Management Maria Fernandez Guajardo.
From advances in bringing connectivity to more people throughout the world to state-of-the-art research breakthroughs in AI to the development of entirely new experiences in AR/VR, Facebook continues to build new technologies that will bring people closer together and help keep them safe.
At F8, Facebook’s artificial intelligence research and engineering teams shared a recent breakthrough: they successfully trained an image recognition system on a data set of 3.5 billion publicly available photos, using the hashtags on those photos in place of human annotations. This new technique will allow researchers to scale their work much more quickly, and they’ve already used it to score a record-high 85.4% accuracy on the widely used ImageNet benchmark.
This image recognition work is powered by Facebook’s AI research and production tools: PyTorch, Caffe2 and ONNX. The teams also announced the next version of the open-source AI framework, PyTorch 1.0, which combines the capabilities of all these tools to provide everyone in the AI research community with a fast, seamless path for building a broad range of AI projects. The technology in PyTorch 1.0 is already being used at scale, including performing nearly 6 billion text translations per day for the 48 most commonly used languages on Facebook. In VR, these tools have helped deploy new research into production to make avatars move more realistically.
The PyTorch 1.0 toolkit will be available in beta within the next few months, making Facebook’s state-of-the-art AI research tools available to everyone. With it, developers can take advantage of computer vision advances like DensePose, which can put a full polygonal mesh overlay on people as they move through a scene — something that will help make AR camera applications more compelling. For a deeper dive on all of the AI updates and advancements announced at F8, including Facebook’s open-source work on ELF OpenGo, check out the posts on the Facebook Engineering Blog or visit facebook.ai/developers, where you can get tools and code to build your own applications.
Facebook’s advancements in AR and VR draw from an array of research areas to help create better shared experiences, regardless of physical distance. From capturing realistic-looking surroundings to producing next-generation avatars, AR/VR experiences are getting closer to feeling like reality. Facebook’s research scientists have created a prototype system that can generate 3D reconstructions of physical spaces with surprisingly convincing results, demonstrated in a side-by-side comparison between normal footage and a 3D reconstruction.
Realistic surroundings are important for creating more immersive AR/VR, but so are realistic avatars. These advances in AI and AR/VR are relevant only to those with a strong internet connection — and 3.8 billion people around the world currently don’t have internet access. To increase connectivity around the world, Facebook is focused on developing next-generation technologies that can help bring the cost of connectivity down to reach the unconnected, and on increasing capacity and performance for everyone else.
To conclude, with half the year gone by, 2018 already looks very promising for technological innovation. The exponential improvement of these technologies will not only allow every business industry to flourish but also reshape our daily lives.