Tech

City planners and technologists joined forces at Smart Cities Week 2018 in a business simulation exercise. The theme was the smart city of the future, which is not at all far-fetched: all around the globe, initiatives for intelligent, connected, IoT-based urban centers are in various stages of development.

To understand the full scale of requirements, challenges and tasks involved in such a mega-structure, we browsed a recap of the simulation, as featured on IoT for All.

 

Things to expect when involved in a smart city project

Although the increasing specialization of the technical field means that not all participants in a project share the same strategic posture, certain major challenges are bound to affect everyone. Even when the work is layered into complementary projects, micro-projects and various team types, the professionals on the overall team still have to acknowledge the bigger picture.

What might a few of the common challenges be? Lateral thinking is one of them. For such a venture to succeed, the overall decision-makers, as well as those who decide for their own area of responsibility, need to bring an indirect, creative approach to the table.

Balancing the budget against much-needed strategic investments is yet another must. Investing in new technologies goes hand in hand with retrofitting older structures; blending the two, however, means seamless integration. Participants need to think things through. They also need a can-do attitude, a grasp of the big picture, and a quick awareness of possible pressure or blockage points.

 

A process of bartering and flexible adaptation

The simulation was particularly interesting because it brought city planners together with technology specialists. While the urban-planning professionals brought an approach forged by the specifics of their everyday activity, some of the technologists had their first opportunity to see how and why this type of project is different.

City planners are often reluctant to change, stick to their budgets, and need to understand the technology and translate it into budget lines. Understanding the technology alone took a while for some of the participants, and the technologists had to meet them halfway on certain issues.

Collaboration and strategy are key on the road to a functional smart city, as the author repeatedly underlines. The aim, and the measure of success, is the citizens’ quality of life: a notion that seems within reach, but needs defining. It also requires sustained planning, so that it does materialize into better services, better living conditions and so on.

 

Keywords for activities related to the smart cities of the future

We selected a few keywords that might not be the first things that come to mind when contemplating the idea of a smart city:

  • Multiple stakeholders
  • Constrained budgets
  • Challenging social dynamics

Other keywords are less surprising:

  • Economic growth
  • Intuitive infrastructure
  • Public-Private Partnerships
  • Collaborative processes

Materializing smart city projects may involve a sum of integrations. Some software solutions previously used in the private sector can be adopted into such a project; others involve custom-made software programs and applications.

Whatever the case, having the right partner in software engineering is of great use. Our company is an example of such a partner – with the experience accumulated in our projects over the years, we can tackle the IoT and mobile challenges related to the major ventures of the future.



News

Recently, our colleague Petrut Ionel Romeo, Technical Project Manager at Lasting Software, held a presentation entitled Satellite Communication Solution for Cellular Backhaul at the autumn edition of Codecamp Timişoara.

In what is actually a solid business case for satellite-based mobile communications, he outlined the importance of this technology and its two main current purposes, in a way that is clear and palatable to non-specialists and technical people alike.

We will briefly recap the main ideas, leaving aside the more technical content. Nevertheless, let your imagination add to this material the context of a busy, dynamic conference, and you will have the image of a great day…

 

Satellite-related facts

Communication and weather satellites are typically placed in geostationary orbit, which makes it easier for Earth-based antennas to maintain continuous communication. A geostationary orbit keeps the position of the orbiting object fixed relative to the ground: to a stationary observer on Earth, it appears motionless.
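
For the curious, the roughly 36,000 km altitude usually quoted for this orbit (and cited again below) can be checked from Kepler's third law. Here is a minimal Python sketch; it uses only standard physical constants, nothing specific to the presentation.

    import math

    GM_EARTH = 3.986004418e14   # m^3/s^2, Earth's standard gravitational parameter
    SIDEREAL_DAY = 86_164       # s, Earth's rotation period relative to the stars
    EARTH_RADIUS_KM = 6_378     # km, equatorial radius

    # Kepler's third law: the orbital radius whose period equals one sidereal day
    orbit_radius_m = (GM_EARTH * SIDEREAL_DAY ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
    altitude_km = orbit_radius_m / 1000 - EARTH_RADIUS_KM

    print(f"Geostationary altitude: {altitude_km:,.0f} km")  # ~35,786 km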

In time, the number of satellites placed in geostationary orbit grew high enough that the ring they form came to be called the Geostationary Satellite Belt.

Controlling and maintaining sophisticated pieces of technology at such a distance is a huge task, and the system of satellites surrounding the Earth comes with specific challenges. Although the technology has developed at an impressive rate, solving reach, maneuvering and management issues in ways that were not possible a while ago, the physical conditions (and not only those) make satellite technology a highly demanding field.

Satellites operate in extreme thermal conditions, risk being hit by space debris, face the hostility of radiation belts and so on. Perhaps the simplest challenge to understand is that satellites are remote devices, requiring specific procedures for access, maintenance and repair.

Taking a look at the distance-related communication challenges, we may consider that:

  • The Clarke belt (the part of space in the plane of the equator, the designated implementation area for near-geostationary orbits) is about 36,000 km above sea level;
  • For back-and-forth communication, we are dealing with roughly 72,000 km;
  • Taking the speed of light into consideration, the resulting transmission delay is approximately 250–270 ms (see the short calculation below).
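
Here is that calculation as a minimal Python sketch of the propagation delay over a geostationary hop. The speed of light is the only constant involved; real links add slant-path distance and processing time, which is why the quoted figure is slightly higher than this result.

    ALTITUDE_KM = 36_000            # approximate Clarke belt altitude
    SPEED_OF_LIGHT_KM_S = 299_792   # km/s

    path_km = 2 * ALTITUDE_KM                        # ground -> satellite -> ground
    delay_ms = path_km / SPEED_OF_LIGHT_KM_S * 1000

    print(f"Path length: {path_km:,} km")            # 72,000 km
    print(f"Propagation delay: {delay_ms:.0f} ms")   # ~240 ms at zenith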

However, despite these challenges, communication satellites also have a few particular traits that put them above any other communication-enabling technology located on Earth.

The unmatchable advantages of the communication satellites

Satellites are extremely important because in certain locations and/or circumstances, the terrestrial communication infrastructure is inefficient, out of range, or out of function entirely.

There are two major (and, we may add, critical) situations where only satellite-enabled communications can support the flow of necessary data and messages.

  1. Geographically or architecturally isolated spots
    • Fiber and microwave transmissions are often unavailable in rural and remote areas, as there is no business case for deploying them
    • Satellite communications are also efficient in mountains, deserts, islands and other areas that are hard to reach due to landforms and/or structures
    • This type of communication can also relieve congested urban areas: stadiums, malls, markets, academic centers
  2. Situations that require a solid communication infrastructure for emergency response

Satellite can act as a trustworthy backup for fiber, which is not as reliable as one might think. Faced with earthquakes or other cataclysms, both landline and fiber systems easily go down, unlike satellite-enabled communications, which become crucial for keeping in touch, monitoring and intervening in such moments.

 

Communication Satellites Architecture Options

Depending on their type and the communication needs they have to meet, satellites usually feature one of the following main architectures for data transmission:

  1. Point-to-point links
    • When there is a limited number of links
    • For the Nb interface, for example
    • For very high-speed trunks: 300+ Mbps
  2. Hub-and-star architecture
    • Multiplexing on the forward and return links
    • Smaller remote equipment

 

Mobile backhaul: key considerations

Mobile backhaul (MBH) is the transport of data and voice from distributed network sites to the network core. It enables mobile users to access the main data centers that host the content and applications.

Terrestrial backhaul is the traditional method. For a long while, mobile networks and satellites existed in parallel, due to the costs involved in satellite operations. Once more recent advances in satellite technology reduced bandwidth costs – for example, High Throughput Satellites (HTS) brought savings of up to 70% – satellite backhaul became a viable, attractive solution that is gaining traction.

What does the shift towards satellite backhaul involve, as far as mobile communications are concerned?

  • Mobile networks: Quality of Service (QoS) and Service Level Agreements (SLAs) are key for voice traffic, data traffic, signaling and management
  • The mobile equipment must be configured to accept the higher satellite delay
  • Mobile traffic needs to be optimized to lower the cost of satellite bandwidth

 

Mobile traffic optimization

As mentioned above, traffic needs optimization in order to make the most efficient use of the satellite bandwidth. To that end, here are a few possible methods for each telecommunication infrastructure type:

  • 2G TDM: remove silence and idle channels
    • Optimization on the Abis interface can bring up to 50% bandwidth gain
    • Gains on the Ater, A and Nb interfaces as well
  • 2G IP, 3G, 4G: leverage compression for headers (small packets) and payload (data) – see the sketch after this list
  • VoLTE: compress the internal stacks as well (within the GTP tunnel)
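
To see why header compression pays off on small packets, consider the arithmetic below. The header sizes are the textbook IPv4/UDP/RTP values; the payload and compressed-header sizes are illustrative assumptions, not figures from the presentation.

    IP_HEADER = 20    # bytes, IPv4 without options
    UDP_HEADER = 8    # bytes
    RTP_HEADER = 12   # bytes
    PAYLOAD = 32      # bytes, e.g. one small voice frame (assumed size)

    uncompressed = IP_HEADER + UDP_HEADER + RTP_HEADER + PAYLOAD
    compressed = 4 + PAYLOAD   # assuming headers squeezed into ~4 bytes of context

    print(f"Uncompressed packet: {uncompressed} bytes")                 # 72 bytes
    print(f"Compressed packet:   {compressed} bytes")                   # 36 bytes
    print(f"Bandwidth saving:    {1 - compressed / uncompressed:.0%}")  # 50%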

 

*Additional feature: accelerating TCP in 4G

  • TCP traffic is captured transparently within the GTP tunnel, using “protocol spoofing”
  • A mechanism that tries to send as much data as possible, as soon as possible
  • The protocol uses the TCP window scaling mechanism, with a larger window, to increase throughput (RFC 1323) – the sketch below shows why this matters over satellite
  • The protocol implements an alternative acknowledgment mechanism, reducing the number of ACKs.
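
The window-size point deserves a number. Without RFC 1323 window scaling, TCP throughput is capped at window size divided by round-trip time, and over a geostationary path the round trip is long. The RTT and scale factor below are assumptions chosen for illustration.

    CLASSIC_WINDOW = 65_535   # bytes, the maximum unscaled TCP receive window
    RTT_S = 0.5               # s, assumed round trip over a geostationary hop

    ceiling_mbps = CLASSIC_WINDOW * 8 / RTT_S / 1e6
    print(f"Unscaled throughput ceiling: {ceiling_mbps:.1f} Mbps")   # ~1.0 Mbps

    # With window scaling, the window grows by a power of two (here 2**4 = 16):
    scaled_mbps = CLASSIC_WINDOW * 16 * 8 / RTT_S / 1e6
    print(f"Scaled throughput ceiling:   {scaled_mbps:.1f} Mbps")    # ~16.8 Mbps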

 

The importance and requirements of the satellite solution

Mobile network operators have long considered satellite backhaul an attractive option, since it is the only option in extreme situations, whether permanent (remote or congested spots) or temporary (cataclysms, catastrophes, major downtime incidents). While it used to be cost-prohibitive, advances in technology now open up this option.

The new opportunities come with their own specific traits and requirements:

  • Spectral efficiency is essential in reducing the cost of satellite bandwidth
  • Quality of Service (QoS) is the key to being able to fulfil the mobile operators’ tight requirements
  • Flexibility is important: it accommodates multiple mobile technologies (2G, 3G, 4G) and various network configurations (including low-power rural sites)
  • The technology allows for bandwidth sharing and dynamic adjustment to real-time traffic and weather conditions
  • Scalability, which is critical for:
    • Typical mobile backhaul network: 20 to 100 remote sites
    • Typical small cell network: 1000 sites
    • Increasingly, mobile operators also want other services hosted on the same satellite solution, e.g. enterprise services, resulting in more remote sites

 


Tech

Starting from its own report, Automation World reports this week on a phenomenon of interest for all owners and developers whose activities are linked to the Industrial Internet of Things (IIoT). In what the online publication deems a key fact, it seems that

Manufacturers are mapping out a route that covers both (cloud & edge), each an integral on-ramp for new predictive maintenance and performance monitoring applications at different stops on the journey.

 

Where is IIoT taking place, according to the AW report?

Not so much taking place as evolving, operating, being deployed with all its features, data flows and streams of communications, of course. However one calls it, here is what the numbers say:

  • 43% of respondents revealed they have edge computing implementations underway;
  • 51% of respondents already use cloud computing in their IIoT implementations;
  • 20% of respondents mentioned fog computing, “a superset of edge computing used to bridge the traditional operations (OT) world with enterprise IT”, according to AW;
  • 29% of respondents marked their answer under “none of the above”.

While some of you may be momentarily surprised that the total exceeds 100%, this is because survey respondents could pick more than one answer.

 

The users’ degree of maturity with edge and cloud computing in IIoT, according to the same study

The angle here is that the time elapsed since adopting one of the options, or both, entitles users to a firmer, more valid verdict when comparing the environments they employ for IIoT.

  • 48% of users have employed these technologies for more than 3 years
  • 39% have over 2 years of experience with edge/fog or cloud computing technology
  • 12% of respondents are still at the beginning with these technologies

 

Which environment is predominant?

Asked about their preferred path for production/manufacturing data analysis applications, the survey respondents ranked their preferences as follows:

  • 52% cited edge computing
  • under 20% mentioned equipment data analytics – for capacity or overall equipment effectiveness

As far as their “preferred paradigm for enterprise effectiveness analysis” is concerned,

  • 5% chose cloud computing
  • 18% went for edge computing
  • 27% mentioned fog computing

 

The relationship between the types of IIoT deployment and the overall benefits

Since their experience with one deployment environment or the other is around 2–3 years, the surveyed companies could paint a picture of the overall benefits:

  • 50% of the responding companies vouched for significant reductions in downtime
  • 38% mentioned “measurable improvements to production output”
  • 37% witnessed a considerable increase in profitability
  • 30% mentioned a decrease in production costs

A typical rollout pattern for IIoT deployments moves from cloud to edge, then back to the cloud (a mixed version).

For other relevant numbers, as well as for perspectives in the related hardware field, you may access the source Automation World article here.

 

News

A couple of days ago, Microsoft launched a series of updates that benefit developers. At its Ignite event (Orlando, FL), the company revealed new and improved features across many of its product lines, in line with its AI & ML focus for 2018.

We browsed TechCrunch’s report of the notable developer-centric updates.

 

For the Microsoft Azure Machine Learning services

The selection, testing and tweaking processes become mostly automated, saving developers significant time. They are also able to build “without having to delve into the depths of TensorFlow, PyTorch or other AI frameworks.”

Also, more hardware-accelerated models for FPGAs will be available from now on. (FPGAs are powerful field-programmable gate arrays, used “for high-speed image classification and recognition scenarios on Azure”.)

Microsoft decided to add a Python SDK to the mix, too, making Azure Machine Learning more accessible from a wider variety of devices. According to Microsoft, this SDK “integrates the Azure Machine Learning service with Python development environments including Visual Studio Code, PyCharm, Azure Databricks notebooks and Jupyter notebooks”. It also lights up a number of different features, such as “deep learning, which enables developers to build and train models faster with massive clusters of graphical processing units, or GPUs” and link into the FPGAs mentioned above.
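
As a taste of what that integration looks like, here is a minimal sketch of connecting to a workspace and creating an experiment with the azureml-core package; the workspace details come from a config.json downloaded from the Azure portal, and the experiment name is a placeholder of our own.

    from azureml.core import Workspace, Experiment

    # Loads subscription, resource group and workspace name from ./config.json
    ws = Workspace.from_config()

    # An experiment groups related runs, e.g. for automated model selection
    exp = Experiment(workspace=ws, name="demo-automl")
    print(f"Workspace: {ws.name} / Experiment: {exp.name}")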

 

For the Microsoft Azure Cognitive Services


The Microsoft speech service for speech recognition and translation also features an upgrade. The improvements are in terms of voice quality, as well as availability.

Apparently, the voices generated by Azure Cognitive Services’ deep learning-based speech synthesis are now true to life.

 

For the Microsoft Bot Framework SDK

The company declared that it is now much easier for any developer to build their first bot.

*However, TechCrunch’s comment on this reminds us that the bot hype is, for the moment, a thing of the past. Therefore, it remains to be seen how many developers will take advantage of the new features. Nevertheless, more natural human–computer interactions are now available through the Microsoft Bot Framework SDK.

 

The automated model selection and tuning of so-called hyperparameters that govern the performance of machine learning models that are part of automated machine learning will make AI development available to a broader set of Microsoft’s customers

– Eric Boyd, corporate vice president, AI Platform, Microsoft

Details here


Tech

The LASTING Software Big Data experience spans from pharmaceutical industry projects to the business software and consultancy collaborations we are part of. Digital data is our field of action.

While auditing companies and taking part in the preliminary consultancy stage, one notices in time how the needs and requirements of each partner vary, even if the ultimate goals bring us all together. Staying ahead of the competition, getting a firm grasp on the industrial revolution phenomenon and keeping (or making) their organizations successful are goals that companies, and even whole industries, have in common.

Reaching these goals brings a wide range of software solutions into play. The best solution for you is the one that provides the most suitable, actionable answer to your current needs. Scalability is a big plus, because what you invest in now will also deliver tomorrow, next year or in 5 years. Provided you specify this in the consultancy & design phase, sustainability should always be an integral part of a software solution’s scope.

Along these lines, we ran into an article that deconstructs Big Data needs, and we want to share a few of its ideas with you.

 

 

Operational data, a preliminary step to Big Data

While Big Data involves a great deal of external data purchasing, operational data is internal. The ReadWrite article that caught our attention underlines the importance of harnessing this type of data before making the leap to Big Data.

“Mining data for actionable information requires attentive management, accurate analysis, and continuous adjustment, and buying software and raw data doesn’t provide companies with the skills necessary to master these processes overnight.”

In other words, the companies that are new to this need guidance. So do those that have reached the conclusion that something needs to change for the better in the way they do things.

Both this and trying your hand at the data game by processing operational data can benefit from the right partnerships.

The consultants in this field put their know-how and experience at your disposal. Once you successfully deploy operational data solutions, it may be time for Big Data – or not. Often the optimal solution combines both types of data; it all depends on the specifics of your activity.

 

The smaller-scale solutions for digital data, between creativity and customization

Let’s say your organization has a certain operational need. The answer takes the shape of a potential project. Creativity points towards a certain type of solution, perhaps even a completely innovative one. On digging into the matter, some aspects turn out to be limited by technical constraints: you customize down, so to speak. What you imagined at first glance can change once confronted with budgeting, resources, compatibility, project duration or other reasonable expectations.

When done in-house, this stage can be too harsh or, in some cases, not harsh enough. Both are dangerous: instead of getting things done the right way, you either settle for a more limited solution or delay problem-solving by chasing an unrealistic projection. By partnering with a software solutions company, you can test these “could be”-s against a backdrop of hands-on experience, industry know-how and up-to-date knowledge of the software solutions field.

The second stage of customization can take place once you already have a skilled software partner. Your operational need may prove familiar across your industry or field, so the answer to it need not start from scratch. Building upon an already validated solution means customizing up: the team creates and/or activates new features. Cheaper and faster, this way of getting a personalized solution is perhaps the most common nowadays.

Contact LASTING Software, let’s talk about this and see where it takes us! We are here for your projects!

 

