News

This week we focused on autonomous vehicles, inspired by this Innovation Enterprise post aimed at the decision-makers inside organizations. Everyone concerned needs to prepare in advance for the incoming changes that are set to affect industry after industry. To get an idea of the scale of the disruptions, their certainty, and their character, let’s go through the topic the way that publication approached it.

Autonomous vehicles will pervade the logistics and transportation fields, and the legal adjustments point to this

Firstly, we should note that the article looks at how autonomous vehicles are progressing in the US. The implications and legal changes presented therefore relate to this geographical area, for now.

The Department of Transportation recently issued a new 70-page guidelines document. Officials stated that they changed the definitions of “operator” and “driver” to allow for AI-driven vehicles.

Given this major change, those in charge of businesses where such vehicles might play a role in the future should prepare in advance. Exactly how will autonomous vehicles impact these businesses? The article includes advice for assessing the future impact of the changes to come.

Mapping out which companies will drive this type of change is yet another step in getting ready for it. Autonomous vehicles rely on AI (Artificial Intelligence), and companies such as Google, Tesla, and even Domino’s Pizza have made no secret of their AI commitment.

Besides keeping an eye on the moves of such innovators and adopters, investing in AI-related stocks is a second option many will consider.

 

Going fully driverless means advanced technology and overcoming the security issues in autonomous vehicles

Analysts estimate that fully driverless transportation could become a reality somewhere between 2030 and the mid-2040s. We are talking about trucks and goods delivery, and the figures already point to considerable cost-saving benefits, among others.

The road to this stage may seem long, and it is surely a winding one. However, the first official regulatory step may well be the new definition of what a driver is.

On the other hand, road safety and cybersecurity concerns show the need for high-performance, fault-tolerant software and hardware. Only solutions that guarantee the safety of everyone taking part in traffic offer a way to move forward.

Going fully driverless in transportation and delivery does not yet have consumer acceptance. Autonomous vehicles still have some way to go before convincing the majority of people that they pose no risk on the road; at the very least, they have to prove they are not a bigger risk than vehicles with human drivers.

Autonomous trucks generally officially operate at what’s known as Level 2, an engineering standard that includes technologies such as automatic braking, acceleration, and some amount of steering. (Basic cruise control, by contrast, would amount to Level 0, and certain features such as lane-assist or adaptive cruise control would be Level 1). However, autonomous trucks are often effectively operating at Level 4 – or “high automation,” with their safety drivers generally only taking over on local roads.
– Richard Bishop, an automated vehicles industry analyst, quoted by US News

Image credit: curbed.com

 

 


News

Recently, our colleague Petrut Ionel Romeo, Technical Project Manager at Lasting Software, gave a presentation entitled Satellite Communication solution for Cellular backhaul at the autumn edition of Codecamp Timişoara.

In what is essentially a solid business case for satellite-based mobile communications, he outlined the importance of this technology and its two main current purposes, in a way that made it clear and accessible to non-specialists and technical people alike.

We will briefly recap the main ideas, leaving aside the more technical content. Nevertheless, let your imagination add the context of a busy, dynamic conference, and you will have the picture of a great day…

 

Satellite-related facts

Communication and weather satellites are placed in geostationary orbit, which makes it easier for Earth-based antennae to maintain continuous communication. A geostationary orbit keeps the position of the orbiting object constant relative to the ground, so to a stationary observer on Earth it appears motionless.

In time, the number of satellites placed in geostationary orbit became high enough that the ring they form came to be called the Geostationary Satellite Belt.

Controlling and maintaining sophisticated pieces of technology at such a distance is a huge task, and the system of satellites surrounding the Earth comes with specific challenges. Although technology has advanced at an impressive rate and the earlier reach, maneuvering, and management issues have been solved in ways not possible a while ago, the physical conditions (and not only those) make satellite technology a highly demanding field.

Satellites operate in extreme thermal conditions, risk being hit by space debris, face hostile radiation belts, and so on. Perhaps the simplest challenge to understand is that satellites are remote devices, and accessing, maintaining, and repairing them requires specific procedures.

Looking at the distance-related communication challenges (a quick back-of-the-envelope calculation follows this list), consider how:

  • The Clarke belt (the region of space in the plane of the equator designated for near-geostationary orbits) is about 36,000 km above sea level;
  • When speaking of communication back and forth, we are dealing with about 72,000 km;
  • Taking the speed of light into account, the resulting transmission delay is approximately 250-270 ms.
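To put these figures together, here is a minimal back-of-the-envelope sketch in Python (not part of the presentation; the constants are standard physical values) that derives the geostationary altitude from Kepler’s third law and the resulting propagation delay at the speed of light:

```python
# Back-of-the-envelope check of the figures above (illustrative only).
import math

# Geostationary altitude from Kepler's third law: a = (mu * T^2 / (4 * pi^2))^(1/3)
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1      # orbital period of a geostationary satellite, s
EARTH_RADIUS_M = 6_378_000  # Earth's equatorial radius, m

semi_major_axis_m = (MU_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (semi_major_axis_m - EARTH_RADIUS_M) / 1000
print(f"Geostationary altitude: ~{altitude_km:,.0f} km")  # ~35,786 km (the '36,000 km' above)

# Propagation delay for ground -> satellite -> ground at the speed of light
C_KM_S = 299_792.458        # speed of light, km/s
path_km = 2 * altitude_km   # ~72,000 km up and back down
delay_ms = path_km / C_KM_S * 1000
print(f"Propagation delay: ~{delay_ms:.0f} ms")           # ~239 ms before processing overhead
```

The extra 10-30 ms in the figure quoted above comes from ground-segment processing and slant-range geometry, which this sketch ignores.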

Despite these challenges, communication satellites have a few particular traits that put them above any other communication-enabling technology located on Earth.

The unmatched advantages of communication satellites

Satellites are extremely important because, in certain locations and circumstances, the terrestrial communication infrastructure is inefficient or entirely out of range or out of service.

There are two major (and, we may add, critical) situations where only satellite-enabled communications can support the necessary flow of data and messages.

  1. Geographically or architecturally isolated spots
  • Fiber and microwave transmissions are often unavailable in rural and remote areas, where there is no business case for deploying them
  • Satellite communications are also effective in mountains, deserts, islands, and other areas difficult to reach because of the landforms or structures
  • This type of communication can also relieve congested urban areas: stadiums, malls, markets, academic centers
  2. Situations that require a solid communication infrastructure for emergency response

Satellite can act as a trustworthy backup for fiber, which is not always reliable. In earthquakes or other cataclysms, both landline and fiber systems go down easily, unlike satellite-enabled communications, which become crucial for staying in touch, monitoring, and intervening in such moments.

 

Communication Satellite Architecture Options

Depending on their type and the communication needs they have to meet, satellites usually use one of the following main architectures for data transmission:

  1. Point-to-point links
    • When there is a limited number of links
    • For the Nb interface, for example
    • For very high-speed trunks: 300+ Mbps
  2. Hub-and-star architecture
    • Multiplexing on the forward and return links
    • Smaller remote equipment

 

Mobile backhaul: key considerations

Mobile backhaul (MBH) is the transport of data and voice traffic from distributed network sites to the network core. It enables mobile users to reach the main data centers that host content and applications.

Terrestrial backhaul is the traditional method. For a long while, mobile networks and satellites existed in parallel, because of the costs involved in satellite operations. Once more recent advances in satellite technology reduced bandwidth costs – High Throughput Satellites (HTS), for example, brought savings of up to 70% – satellite backhaul became a viable, attractive solution that is set to gain traction.

What does the shift towards satellite backhaul involve, as far as mobile communications are concerned?

  • Mobile networks: Quality of Service (QoS) and Service Level Agreements (SLAs) are key for voice traffic, data traffic, signaling, and management
  • Mobile equipment: must be configured to accept the higher satellite delay
  • Mobile traffic: needs to be optimized to lower the cost of satellite bandwidth

 

Mobile traffic optimization

As mentioned above, traffic needs to be optimized in order to make the most efficient use of the satellite bandwidth. To that end, here are a few possible methods for each of the following telecommunication infrastructure types (a rough header-compression savings estimate follows the list):

  • 2G TDM: remove silence and idle channels
    • Optimization on Abis can bring up to a 50% bandwidth gain
    • Gains on the Ater, A, and Nb interfaces as well
  • 2G IP, 3G, 4G: leverage compression of headers (small packets) and payload (data)
  • VoLTE: compression of the inner stacks as well (within the GTP tunnel)
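To illustrate why header compression matters so much for small packets, here is a minimal sketch (the packet sizes are typical ballpark values chosen for the example, not figures from the presentation):

```python
# Illustrative estimate of header-compression savings for small VoIP packets.
# Sizes below are typical orders of magnitude, not measured values.

PAYLOAD_BYTES = 32             # roughly one compressed voice frame
HEADER_BYTES = 40              # IPv4 (20) + UDP (8) + RTP (12) headers
COMPRESSED_HEADER_BYTES = 4    # ROHC-style compressed header, typical size

uncompressed = PAYLOAD_BYTES + HEADER_BYTES
compressed = PAYLOAD_BYTES + COMPRESSED_HEADER_BYTES
saving = 1 - compressed / uncompressed

print(f"Uncompressed packet: {uncompressed} bytes")
print(f"Compressed packet:   {compressed} bytes")
print(f"Bandwidth saving:    ~{saving:.0%}")  # about 50% for these example sizes
```

For small voice packets the headers weigh as much as the payload, which is why compressing them roughly halves the bandwidth in this example.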

 

*Additional feature: accelerating TCP in 4G

  • TCP traffic is captured transparently within the GTP tunnel, using “protocol spoofing”
  • A mechanism that tries to send as much data as possible, as soon as possible
  • The protocol uses an alternative window size mechanism, with a larger window, to increase throughput (RFC 1323) – see the throughput sketch after this list
  • The protocol implements an alternative acknowledgment mechanism, reducing the number of ACKs.
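To see why a larger window matters over a geostationary link, here is a minimal sketch of the classic TCP throughput ceiling, throughput ≤ window size / round-trip time (the RTT and window sizes are illustrative values, not from the presentation):

```python
# TCP throughput is capped by window_size / round-trip time (RTT).
# The RTT assumes a geostationary hop plus terrestrial overhead (illustrative).

RTT_S = 0.55                       # ~550 ms round trip over a GEO satellite link
DEFAULT_WINDOW = 64 * 1024         # 64 KiB: the maximum without RFC 1323 window scaling
SCALED_WINDOW = 4 * 1024 * 1024    # 4 MiB: an example window enabled by RFC 1323

def max_throughput_mbps(window_bytes: int, rtt_s: float) -> float:
    """Upper bound on TCP throughput for a given window size and RTT."""
    return window_bytes * 8 / rtt_s / 1e6

print(f"64 KiB window: ~{max_throughput_mbps(DEFAULT_WINDOW, RTT_S):.2f} Mbps")  # ~0.95 Mbps
print(f"4 MiB window:  ~{max_throughput_mbps(SCALED_WINDOW, RTT_S):.1f} Mbps")   # ~61 Mbps
```

Without window scaling, a single TCP connection over such a link stays below 1 Mbps no matter how much satellite capacity is available, which is what the acceleration feature addresses.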

 

The importance and requirements of the satellite solution

Mobile network operators have considered satellite backhaul an attractive option for a long time, because it is the only option in extreme situations, whether permanent (remote or congested spots) or temporary (cataclysms, catastrophes, major downtime incidents). While it used to be a cost-prohibitive solution, advances in technology have now opened up this option.

The new opportunities come with their own specific traits and requirements:

  • Spectral efficiency is essential in reducing the cost of satellite bandwidth
  • Quality of Service (QoS) is key to fulfilling the mobile operators’ tight requirements
  • Flexibility is important: it accommodates the multiple mobile technologies (2G, 3G, 4G) and the various network configurations (including low-power rural deployments)
  • The technology allows for bandwidth sharing and dynamic adjustment to real-time traffic and weather conditions
  • Scalability is critical for:
    • Typical mobile backhaul network: 20 to 100 remote sites
    • Typical small cell network: 1000 sites
    • Increasingly, mobile operators also want other services, e.g. enterprise services, hosted on the same satellite solution, which results in more remote sites

 


News

A couple of days ago, Microsoft launched a series of updates that benefit developers. At its Ignite event (Orlando, FL), the company revealed new and improved features across many of its product lines, in keeping with its AI & ML focus for 2018.

We browsed TechCrunch’s report of the notable developer-centric updates.

 

For the Microsoft Azure Machine Learning services

The model selection, testing, and tweaking processes become mostly automated, which saves developers significant time. They are also able to build models “without having to delve into the depths of TensorFlow, PyTorch or other AI frameworks.”

Also, more hardware-accelerated models for FPGAs will be available from now on. (*FPGAs – field-programmable gate arrays, powerful chips used “for high-speed image classification and recognition scenarios on Azure”.)

Microsoft decided to add a Python SDK to the mix, too, making Azure Machine Learning more accessible from a wider variety of devices. According to Microsoft, this SDK “integrates the Azure Machine Learning service with Python development environments including Visual Studio Code, PyCharm, Azure Databricks notebooks and Jupyter notebooks”. It also lights up a number of different features, such as “deep learning, which enables developers to build and train models faster with massive clusters of graphical processing units, or GPUs” and link into the FPGAs mentioned above.
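For a feel of what working with this SDK looks like, here is a minimal sketch based on the v1 azureml-core package (the workspace configuration file, experiment name, and metric are assumptions made for the example, not details from the announcement):

```python
# Minimal sketch using the Azure Machine Learning Python SDK (azureml-core, v1).
# Assumes a config.json describing the workspace has been downloaded from the Azure portal.
from azureml.core import Workspace, Experiment

# Connect to an existing Azure ML workspace (hypothetical configuration).
ws = Workspace.from_config()

# Group training runs under a named experiment (name chosen for illustration).
experiment = Experiment(workspace=ws, name="demo-training-runs")

# Start an interactive run, log an example metric, and mark the run complete.
run = experiment.start_logging()
run.log("accuracy", 0.91)
run.complete()
```

This is only the connection step; the automated model selection and the FPGA-backed models mentioned above attach to the same workspace concept.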

 

For the Microsoft Azure Cognitive Services


The Microsoft speech service for speech recognition and translation also received an upgrade, with improvements in both voice quality and availability.

Apparently, the voices generated by Azure Cognitive Services’ deep learning-based speech synthesis are now true to life.

 

For the Microsoft Bot Framework SDK

The company declared that it is now much easier for any developer to build their first bot.

*However, TechCrunch’s comment on this reminds us that the bot hype has passed for the moment, so it remains to be seen how many developers will take advantage of the new features. Nevertheless, more natural human-computer interactions are now available through the Microsoft Bot Framework SDK.

 

The automated model selection and tuning of so-called hyperparameters that govern the performance of machine learning models that are part of automated machine learning will make AI development available to a broader set of Microsoft’s customers

– Eric Boyd, corporate vice president, AI Platform, Microsoft

Details here


News

ZDNet explores the way AI, the cloud, and Big Data come together in “the AI regeneration era”. The industry has dubbed the adjacent infrastructure stack Industry 3.0. Thanks to the new generation of AI chips, data-centric software tasks should benefit both in operational databases and analytics and in what machine learning (ML) implies.

Easier handling of Big Data can give companies of all sizes, from any location, access to new operational horizons. But organizing Big Data processing still involves a series of key decisions, which often call for experienced consultants. Pharmaceutical companies already have a number of hard cloud and on-premise choices laid out for them.

Bear in mind that LASTING Software’s implementation of analytical algorithms and statistical analysis engines lies at the base of a world-leading, FDA-approved solution, used by 93% of the world’s pharmaceutical companies. We will therefore walk you through the main challenges expected in the pharma industry, inspired by the article we mentioned.

 

The importance of processing units in accelerating your software workloads

More precisely, it’s about GPUs – Graphics Processing Units, the chips that “leverage parallelism” and keep up better with Moore’s law. Their architecture has responded well to the new challenges, and one of the main GPU producers (NVIDIA) has now announced a set of innovative products built on a new architecture.

Hardware to match modern demands is therefore on the way. But the software also matters: seeing how “GPUs are currently the AI chip of choice for ML workloads”, the ML libraries come into play.

For detailed recommendations, you may access the original article. What you need to remember is that “GPUs can greatly accelerate workloads that can be broken down in parts to be executed in parallel”. Enough said.
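As a small illustration of that point, here is a hedged sketch (it assumes a machine with an NVIDIA GPU and the CuPy library installed; the matrix size is arbitrary) showing how a workload made of many independent operations, such as matrix multiplication, can be dispatched to a GPU:

```python
# Illustrative comparison of a parallel-friendly workload on CPU vs GPU.
# Requires NumPy; the GPU path additionally requires CuPy and an NVIDIA GPU with CUDA.
import time
import numpy as np

def time_matmul(xp, n=4096):
    """Multiply two random n x n matrices using the given array module (numpy or cupy)."""
    a = xp.random.rand(n, n).astype(xp.float32)
    b = xp.random.rand(n, n).astype(xp.float32)
    start = time.perf_counter()
    _ = a @ b                       # many independent dot products, executed in parallel
    if xp.__name__ == "cupy":       # GPU kernels launch asynchronously; wait for completion
        xp.cuda.Stream.null.synchronize()
    return time.perf_counter() - start

print(f"NumPy (CPU): {time_matmul(np):.2f} s")

try:
    import cupy as cp
    print(f"CuPy (GPU):  {time_matmul(cp):.2f} s")
except ImportError:
    print("CuPy not installed; skipping the GPU measurement.")
```

The speedup, when it appears, comes precisely from the fact that the workload splits into parts that can run in parallel; a serial task would gain little.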

 

Field-Programmable Gate Arrays and their software scope

FPGAs, simplistically describable as “boards containing low-level chip fundamentals, such as AND and OR gates”, are not new. Specific tasks or applications are expressed in the hardware description language (HDL) that specifies the FPGA’s configuration.

Reconfiguring them on demand, however, still suffers from a certain immaturity of the software layer. This time, the player that stands out is Intel: by investing in FPGA R&D, the company is trying to catch up with GPUs through a new line of next-gen FPGAs.

Again, the software is crucial, and along with it, the databases and libraries need to support FPGA-accelerated analytics.

 

Once you have decided what you want, different choices ensue

To quote our inspirational article of the week: “Should you build your own infrastructure, or use the cloud? Should you wait until offerings become more mature, or jump onboard now and reap the early adopter benefits? Should you go for GPUs, or FPGAs? And then, which GPU or FPGA vendor?”

You may check some of the details and possible answers put forth by the author.

If you can make sense of them, or even get the big picture, then you must be familiar with both hardware and software – kudos to you.

Even so, your company’s activity might need that time to focus on different matters. You can still rely on a partnership in which you state what you need, infrastructure-wise, and your software solutions partner delivers it.

Unable to follow the detailed options presented by ZDNet? Then the assumption that your company is a typical pharmaceutical industry entity is probably the right one. There is no need to get stuck trying to learn software-specific notions or to work out by yourself what the best hardware elements would be.

 

The multitude of available options calls for the right partnerships. The wisest organizations, poised for efficiency and success, learn to delegate tasks. To make next-gen digitization simple and get straight to the point where you benefit from it, find the right software solutions partner.

We are waiting for your email or call!

