Friday, 30 September 2016

Synopsys looks to address IoT security concerns with latest secure processors

As the Internet of Things develops, security is becoming increasingly important, particularly for so-called edge devices. Typically, these have been MCU-based and vulnerable to attacks.


Looking to address the problem, Synopsys has launched the DesignWare ARC SEM110 and SEM120D security processors, said by the company to protect systems against software, hardware and side-channel attacks, as well as allowing designers to separate secure and non-secure functions as part of a Trusted Execution Environment (TEE).

Angela Raucher, ARC EM product line manager, said: “Because a lot of companies designing IoT edge nodes are concerned about power, performance and area (PPA), they want to run cryptography in software, rather than hardware, so the device needs to be protected. However, security can’t be solved at the expense of PPA because developers want to provide long battery life.

“Threats can be seen at the network, chip and device levels and the key to the solution is to put in more layers of protection.”
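One layer that can be added purely in software is making secret-dependent operations constant-time, so that execution time leaks nothing about the data being processed. The sketch below shows a generic constant-time comparison in C; it illustrates the technique only and is not a description of how the ARC SEM cores implement side-channel resistance.

```c
#include <stddef.h>
#include <stdint.h>

/* Constant-time comparison: the loop always touches every byte and
 * accumulates differences with bitwise OR, so the run time does not
 * depend on where (or whether) the buffers first differ. */
static int ct_compare(const uint8_t *a, const uint8_t *b, size_t len)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++) {
        diff |= (uint8_t)(a[i] ^ b[i]);
    }
    return diff == 0;   /* 1 if equal, 0 otherwise */
}
```

A naive early-exit comparison, by contrast, finishes faster the sooner the inputs differ, handing an attacker a timing signal.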

The SEM processor cores are based on the 32-bit ARCv2 instruction set architecture (ISA) and optimised for area and power efficiency. The SEM110 integrates a range of security technologies and can be implemented in an SoC either as a standalone secure core or as a single core performing secure and non-secure functions. The SEM120D adds DSP functionality, catering for applications such as sensor processing and voice identification in healthcare and IoT devices.

Key features of the devices include: resistance to side-channel attacks; an enhanced memory protection unit; a tamper-resistant pipeline with instruction and data encryption, as well as address scrambling; and a watchdog timer to detect system failures, including tampering.
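As a rough idea of how firmware typically uses such a watchdog, the sketch below periodically ‘kicks’ a countdown timer; if the main loop hangs or is subverted, the kick is missed and the hardware forces a reset. The register addresses and magic value are hypothetical placeholders, not the ARC SEM programming model.

```c
#include <stdint.h>

/* Hypothetical memory-mapped watchdog registers -- placeholders only. */
#define WDT_LOAD   (*(volatile uint32_t *)0x40001000u)  /* countdown reload value    */
#define WDT_KICK   (*(volatile uint32_t *)0x40001004u)  /* write magic value to feed */
#define WDT_MAGIC  0x5A5A5A5Au

static void watchdog_init(uint32_t timeout_ticks)
{
    WDT_LOAD = timeout_ticks;   /* reset fires if the counter ever reaches zero */
}

static void watchdog_kick(void)
{
    WDT_KICK = WDT_MAGIC;       /* must be written before the timeout elapses */
}

void main_loop(void)
{
    watchdog_init(1000000u);
    for (;;) {
        /* do_work();  -- if this hangs or is tampered with, the kick is
         * missed and the watchdog forces a system reset. */
        watchdog_kick();
    }
}
```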

Technology leaders are jostling to capture their share of the emerging market for computer vision

Virtual reality technology is nothing new; the first devices aiming to give users the ability to enter ‘another world’ appeared more than 20 years ago. But a combination of the electronics technology not being quite up to the job and a lack of sensible applications saw VR put back on the shelf, although it continued to have niche uses.


It wasn’t until the 2016 Consumer Electronics Show that VR burst back on the scene, backed by a range of big names. Perhaps the one thing that gave VR its new momentum was Facebook’s acquisition of Oculus, maker of the Rift headset, in 2014. Since then, technology companies appear to have been fighting each other for leadership in the field.

What has also emerged as a viable technology is augmented reality, or AR, where the view of the ‘real world’ is supplemented by computer generated imagery. Looking to develop a niche for itself, Intel has begun to refer to AR as ‘merged reality’.

Meanwhile, a similar technology has been developing quite nicely – gesture recognition. From the starting point of touchscreens, gesture recognition has moved into the third dimension and is an area in which Intel has shown great interest, launching its Perceptual Computing platform, since renamed RealSense. Showing its enthusiasm for VR, Intel has launched Project Alloy, which it describes as an all-in-one VR solution that deploys RealSense technologies optimised for VR.

Build (and Crash) Your Own Wacky Lego Drones with Flybrix

Flying a drone is fun, but building one is rewarding, and it isn't just for hardware nerds and drone racers anymore. The new Flybrix kit, designed by a bunch of MIT, Caltech, and UW Madison alumni, lets anyone get in on the fun of designing and building a multi-prop flier by using Lego as the foundation.
The kits, which start at $150, come with everything you need to make a craft that can actually fly: propeller arms, a motor, a battery, and a chip to control it all. The cheapest pack has you use your smartphone as a Bluetooth controller, though the deluxe $190 kit comes with a more traditional dual-stick controller.

DARPA's 'Aerial Dragnet' Will Monitor Drones in Cities

While air traffic control systems track, guide and monitor thousands of planes and helicopters every day, one group of sky flyers remains unmonitored: drones.
In recent years, small unmanned aerial vehicles (UAVs), such as commercial quadcopters and hobby drones, have become less expensive and easier to fly — adding traffic to airspace that's already congested. Drones are also more adaptable for terrorist or military purposes, and because they are currently flying unmonitored, U.S. forces want to be able to quickly detect and identify UAVs, especially in urban areas.
A new project launched by the Defense Advanced Research Projects Agency (DARPA), the Pentagon's research arm, wants to map all small drone activity in urban settings. Managers of the Aerial Dragnet program are soliciting proposals to help the military provide continuous surveillance of drones on a city-wide scale. [Humanoid Robots to Flying Cars: 10 Coolest DARPA Technologies]

The capacity for storage is fast becoming a vital component of smart city infrastructure

Automated road traffic management systems, surveillance systems and storage for the vast amount of data being generated will make up a core part of the future smart city infrastructure. Much of this information is sensitive, so the storage technology used needs to be the best available.


Understanding how storage works and how end users can access it is now the focus for storage companies as they look to facilitate the adoption of cloud-based data centres.

As smart cities grow, video surveillance is set to become a key target for storage companies over the next few years, and as the cost of network video surveillance cameras drops below $100, the cost of implementation will fall. As a result, the most difficult task for storage companies is ensuring that the cost of storage, or cost per gigabyte, falls in line with this while maintaining high standards of performance, compliance and security across data management and storage systems.

There is an accelerated demand for security cameras in cities around the world and, according to a recent IHS report, the global market for video surveillance will grow by 7% in 2016 with some 66 million network cameras shipped globally, of which 28 million will have high definition capabilities. This proliferation of security cameras is set to see one camera deployed for every 14 people in London and body-worn cameras being used in Beijing as part of its Metro project. Overall, the video surveillance industry is set to grow at a Compound Annual Growth Rate (CAGR) of 10% to $24.2 billion by 2019.

This increasing demand means the need for raw capacity (the total unformatted capacity of the underlying storage) is going to increase. Surveys suggest that the total raw capacity of enterprise storage used for video surveillance is set to increase by 48 per cent this year. This demand doesn’t just affect the cost per gigabyte and required performance of storage; flexibility is also a necessity for data centre providers. They need to serve an oscillating consumer audience demanding open API access for seamless integration into a plethora of technology systems and integrated services – and provide end-to-end security too.
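To put the raw-capacity question in perspective, the short sketch below estimates how much storage a camera fleet demands from bitrate, camera count and retention period. The figures plugged in are illustrative assumptions, not data from the surveys cited above.

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative assumptions -- adjust to the actual deployment. */
    const double cameras        = 1000.0;   /* network cameras on the system */
    const double mbit_per_sec   = 4.0;      /* average HD stream bitrate     */
    const double retention_days = 30.0;     /* how long footage must be kept */

    /* bits -> bytes per day, then total terabytes over the retention window */
    const double bytes_per_day = mbit_per_sec * 1e6 / 8.0 * 86400.0;
    const double raw_terabytes = cameras * bytes_per_day * retention_days / 1e12;

    printf("Estimated raw capacity: %.1f TB\n", raw_terabytes);
    return 0;
}
```

Even with these fairly modest assumptions, the answer comes out at roughly 1.3PB, which is why cost per gigabyte and rack density dominate the economics.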

This demand for open API access also comes with an expectation of performance quality and high IOPS. That makes it essential for storage to operate with an Edge-Core architecture, typically built on flash, in which an Edge filer or cluster of Edge filers provides low-latency data writes and reads, rather than a traditional NAS architecture, which has much greater latency and requires data to be transferred through a network to a filer, then to repositories and back again.
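The attraction of the Edge tier can be sketched in a few lines: writes are acknowledged as soon as they land in a small, fast buffer and are pushed to the larger core repository in batches. The toy example below illustrates only the principle; a real Edge filer is considerably more involved.

```c
#include <stdio.h>

#define EDGE_SLOTS 8   /* small, fast edge tier (e.g. flash) */

static int edge_buffer[EDGE_SLOTS];
static int edge_count = 0;

/* Push the edge tier's contents to the large, slower core repository in one batch. */
static void flush_to_core(void)
{
    for (int i = 0; i < edge_count; i++) {
        printf("core <- frame %d\n", edge_buffer[i]);
    }
    edge_count = 0;
}

/* Writes are acknowledged as soon as they hit the edge tier,
 * keeping write latency low for the cameras. */
static void write_frame(int frame_id)
{
    if (edge_count == EDGE_SLOTS) {
        flush_to_core();
    }
    edge_buffer[edge_count++] = frame_id;
}

int main(void)
{
    for (int frame = 0; frame < 20; frame++) {
        write_frame(frame);
    }
    flush_to_core();   /* drain whatever is left */
    return 0;
}
```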

Video segments

Understanding the converging requirements for hardware and software integration behind video storage is crucial for companies in this space. The diversity of end-point devices, and the fact that cloud providers have to cater for a wide range of applications, means a comprehensive understanding of front-end processes and capability needs is essential.

Video surveillance storage hardware is split into two segments: hardware network video recorders (NVRs) with preconfigured video management software (VMS) and storage; and enterprise storage (storage with no VMS). VMS is software that runs on IT hardware with the functionality to manage and record digital video streams directly from an IP network onto storage devices.

While traditionally storage organisations would have focussed solely on the storage of video, evolving infrastructure requirements mean they have to take front end hardware and cloud applications into consideration when designing storage for video segments.

City surveillance, airports and government dominate enterprise video surveillance storage spending globally, accounting for over 50 per cent of the market, and end-point applications can vary from body-worn cameras to citywide surveillance. Because the volume of data captured grows exponentially and must often be stored over extended periods, manufacturers need to focus on keeping the Total Cost of Ownership (TCO) as low as possible. It is critical that storage providers pay attention to the architecture, capacity and overall deployment of their storage systems, particularly in cloud storage, facilitating scalable models with robust systems that provide clients with longevity of use and performance throughout the product lifecycle.

Creators of cloud-based storage must also begin to build storage with space limitations in mind. Video surveillance footage needs to be stored for much longer periods than traditional data, and the high capacity requirements mean that physical storage space must be a consideration for manufacturers designing storage for data centres. Reducing rack space reduces the TCO of enterprise storage, bringing down operational costs for data centre owners.

How the project manager and their team can get to where they want to be

If you’re not managing projects today, it may not be too long before the phone rings and the boss passes on the good news that you have been chosen to drive the company’s next project. So what might you expect?


Simon Naylor, head of consumer and industrial project management with Cambridge Consultants, said a project manager’s involvement can start at many points – from a blank sheet of paper to a mature design. “But what determines the success, or otherwise, of your project frequently isn’t the technical side,” he warned. “It often comes down to the project’s stakeholders, its leadership team and your ability to control things.”

One of the first things a project manager needs to understand is just who has an interest. “There are always stakeholders external to the project team,” he said. “When I first discuss a project, I hope there are two or three key stakeholders and there is access to them. Fewer than this and the chances are there are stakeholders we don’t know about. In this situation, the project is destined for an unpleasant realignment when missing stakeholders reveal themselves.”

Too many, Naylor added, and you could be in for a difficult time. “You could end up with something ‘designed by committee’ because you can’t make the necessary decisions.”

You should also hope that stakeholders talk to each other and have complementary views and aims. And at least one of them should have the authority to make quick and lasting decisions.

Before the project kicks off, talk to the stakeholders, understand who they are and what’s driving their involvement. “If there’s uncertainty, that will be an area of risk and it’s worth investing time to work out the relationships.

“As a project manager, you want to have control over what you’re promising to deliver. The worst case is when there are fixed timescales, development effort and deliverables – for the typical project, that’s a ‘car crash’ waiting to happen.”

But a project manager doesn’t have to be responsible for everything. “They will be responsible for the timescales,” Naylor pointed out, “and the deliverables. But, ideally, they will have a ‘partner in crime’ – the technical authority or lead – who will be responsible for the technical stuff.”

His analogy is a rally car. The project manager is the driver, with the technical authority navigating. “Between you, you work out the best way of getting to where you need to be.”

Another area of risk is a fixed specification. “Specs are things that are hard to make 100% complete and are prone to interpretation differences. It’s an important document and something you need the ability to work on.

“Think of it as a discussion document; one of its purposes is to extract information from stakeholders and to then get rid of possible misunderstandings.”

If you’re handed a complete spec, he added, it might be OK. “But make sure you have control over timescales and costs,” he advised. “However, when things are written down and you weren’t involved, that represents a danger area – you won’t understand why they were written down. In reality, the world is grey scale, rather than black and white, so if you have worked on the spec, you will have a better understanding of what’s really needed.”

Any project leadership team worth its salt should identify the high risk areas. “Pick them out and prototype them,” Naylor said, “or check them against the Laws of Physics to convince yourself the project can proceed.”

There are three important elements to project definition – commercial, technical and management. “All three should be represented,” Naylor asserted. “Commercial people want to sell something, technical people push functionality, while the project manager wants to deliver something. There will always be tension between them.”

Project management triangle

Naylor said there are three elements to project management and these can be considered as a triangle. “They are cost, deliverables and timescales,” he explained. “You can’t have all three; there has to be room for manoeuvre. Quality is a further issue, particularly if you’re developing a medical device. Then, the area of the triangle will become larger. You have to be able to trade these elements off against each other; if they’re fixed, then that’s a big risk.”

Project managers have to perform this trade-off constantly, Naylor continued. “If not, then it will be hard to deliver the project. But the priority at any point depends on what’s driving the stakeholders. You might need to show something at an exhibition; that’s a fixed date, so development effort or functionality will need to be flexible. Remember that things change and the project will need continual discussion.”

A European project aims to enable a ‘new era’ of reconfigurable multicore devices

Efforts to give those programming in C an efficient way to create hardware have been under way for many years. One of the first tools to support this was created in the early 1990s in Oxford University’s Computing Laboratory, where Ian Page developed Handel-C as a means of bridging between C and silicon.


The interest generated by Handel-C saw the software commercialised by Embedded Solutions, later to become Celoxica. Handel-C, meanwhile, was bought and sold and is now part of Mentor Graphics’ DK Design Suite. Mentor says this software, which supports code being compiled directly into FPGA logic, is suitable for those using C and C++, but who don’t have much hardware design experience.

While efforts have been made to develop C to hardware technologies, there has also been work on developing C to multicore systems, recognising the growing importance of the latter technology in embedded systems.

One of the biggest European projects to date is bringing together something like 100 partners to work on the challenge. Called EMC² – ‘Embedded Multi-Core systems for Mixed Criticality applications in dynamic and changeable real-time environments’ – the ARTEMIS Joint Undertaking project has a budget of some €100 million.

Flemming Christensen, managing director of Sundance Multiprocessor Technology, said: “EMC² is massive and one of the EU’s biggest funded projects.”

According to Christensen, EMC² ‘wants to make an impact’. “It’s addressing areas such as embedded computing, the IoT and the things that go around them. The focus is on multicore and on time critical applications, where things cannot go wrong.” And the EMC² website highlights this, noting ‘the objective of EMC² is to establish multicore technology in all relevant embedded systems domains’.

According to EMC², the development of multicore hardware architectures and concepts with partial reconfiguration will open up what it calls a ‘new era’ of time multiplexed hardware coprocessors. These devices, which are expected to reduce power consumption and improve efficiency, have the potential to enable reconfigurable multicore processors, it contends.

“EMC² is also about making multicore systems safe,” Christensen noted, “because these types of device will eventually be used in embedded computing. In short, it’s about making sequential applications into multicore applications.”

He estimated that some 95% of embedded solutions feature a single CPU, but pointed out that almost all new processors are multicore. In a complex system, such as a car, there might be more than 100 CPUs. “Wouldn’t it be better if fewer multicore chips could be used?” he wondered. And that multicore device might not just feature CPU cores. “EMC²’s goal is ‘generic’ multicore; it could be a number of CPUs, with hardware accelerators alongside,” he continued. “So while the C to VHDL aspect of EMC²’s work is relevant to FPGA users, there is also a significant element that is exploring how to map software to multiple CPUs.”
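As a small illustration of what mapping a sequential loop onto multiple CPUs can look like, the sketch below uses OpenMP to spread an accumulation across the available cores. It is a generic example of the idea, not EMC²’s tooling or its output.

```c
/* Build with OpenMP enabled, e.g.: cc -fopenmp parallel_sum.c */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double data[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++) {
        data[i] = (double)i;
    }

    /* The reduction clause gives each core a private partial sum and
     * combines them at the end, turning a sequential loop into a
     * multicore one with a single annotation. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        sum += data[i];
    }

    printf("sum = %.0f (using up to %d threads)\n", sum, omp_get_max_threads());
    return 0;
}
```

The appeal – and the difficulty Christensen describes – is that real applications rarely decompose this neatly.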

In Christensen’s opinion, this latter aspect remains the ‘Holy Grail’ of embedded software development. “It might be 20 years before the multicore dilemma is solved, maybe longer.”

Sundance has a long history of developing C to VHDL hardware and software and has a range of products for high performance embedded processing applications. Initially, it designed devices for the parallel processing market, but has since broadened its scope to include PC add-in boards and modules. Most of its products are based on TI’s TMS320C6x processors or on Xilinx’ Virtex FPGAs.