Technical Deployment of the IoTUK Project Platforms
This blog forms part of a series that looks at the technical experiences of three IoTUK projects that ran between 2016 and 2018.
Each of the three projects implemented their own IoT technology platform and in this blog we’ll explore how they architected these platforms. You will learn about the components used in the architecture and how the platforms were deployed.
CityVerve is the UK’s smart cities demonstrator, based in Manchester. CityVerve explored how IoT technologies and the forging of new relationships between the public and private sectors can make a city truly smart. The resulting solution was a Platform of Platforms connected by a central portal that exposed data from different elements of the city. The data was open to third-party developers, allowing them to create new and innovative apps.
Diabetes Digital Coach (DDC) is an NHS IoT Test Bed project developed by a consortium of 10 technology and evaluation partners, led by the West of England Academic Health Science Network (AHSN). DDC offers an online service to help people with Type 1 and Type 2 diabetes manage their condition and cut the risk of complications. It is available via computer, smartphone and tablet and brings together a number of digital self-management tools, which provide personalised support. When people sign up for an account with the Diabetes Digital Coach, they provide information about themselves, their health, lifestyle, and how they currently manage their diabetes. The Coach then suggests the most appropriate tools to suit their individual needs. The Coach features five tools which have been carefully selected by both healthcare professionals and people with diabetes; this ‘menu’ covers self-management education, dietitian support, optimising physical activity, insulin and glucose management, and a personal health record.
Technology Integrated Health Management (TIHM) is an NHS IoT test bed aiming to transform support for people with dementia and their carers. The collaboration involves partners from the health, voluntary and technology sectors. Each family in the trial was provided with a home technology pack suited to their particular needs. Like the DDC portal, TIHM connects different technology partners, but here data from different sources is correlated in real time, allowing insights to be gleaned. Any unusual signs are flagged to clinicians, who decide on the appropriate action to take.
The high-level overview of the interactions in the TIHM architecture is presented in Fig 1. As shown, it is composed of four main parts:
- Sensors installed in patient homes
- Back-end servers belonging to Health service companies (SMEs)
- TIHM backend system including the storage and analysis servers
- User interface for data visualisation and management.
The project aimed to automatically detect patient Agitation, Irritability and Aggression (AIA). Sensors were deployed in homes to collect data about the environment and surroundings, such as humidity and temperature, appliance usage, etc. In addition, medical devices and wearable technologies were used to measure important physiological parameters, such as blood pressure and pulse. The sensors and medical devices record data and typically send it to their corresponding gateways over Wi-Fi or Bluetooth. Gateways relay the data to the SME backend systems over GPRS or home broadband. The SME backends were hosted on cloud servers already in use by the companies; there was no requirement to consolidate the cloud deployments, as doing so would have introduced additional delays and costs to the project.
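To make the gateway relay step concrete, here is a minimal sketch of how a reading might be packaged before being sent on to an SME backend. The field names and device identifier are illustrative assumptions, not the project's actual schema:

```python
import json
from datetime import datetime, timezone

def build_reading(device_id, metric, value, unit):
    """Package a single sensor reading as a gateway might relay it.
    Field names are illustrative, not TIHM's actual message schema."""
    return {
        "device_id": device_id,
        "metric": metric,
        "value": value,
        "unit": unit,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

reading = build_reading("env-042", "temperature", 21.5, "C")
payload = json.dumps(reading)  # serialised for transport over GPRS/broadband
```

In practice each SME's gateway would use whatever wire format its backend expects; the point is simply that readings are timestamped and attributed to a device before leaving the home.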
The communication with the TIHM backend system was achieved with RabbitMQ using the Advanced Message Queuing Protocol (AMQP). This is the core exchange for TIHM messages and is able to receive, process, and reply to requests coming from the SME backends, TIHM backend system, and the user interface.
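RabbitMQ's topic exchanges route each message to the queues whose binding patterns match its routing key (`*` matches one dot-separated word, `#` matches zero or more). The sketch below simulates that routing in memory to show the idea; the routing keys and queue names are made up, and a real deployment would use an AMQP client library such as pika against an actual broker:

```python
def matches(binding, routing_key):
    """AMQP topic matching: '*' matches exactly one dot-separated word,
    '#' matches zero or more words."""
    def rec(b, k):
        if not b:
            return not k
        if b[0] == "#":
            return any(rec(b[1:], k[i:]) for i in range(len(k) + 1))
        if not k:
            return False
        return (b[0] == "*" or b[0] == k[0]) and rec(b[1:], k[1:])
    return rec(binding.split("."), routing_key.split("."))

class TopicExchange:
    """In-memory stand-in for a RabbitMQ topic exchange."""
    def __init__(self):
        self.bindings = []  # (pattern, queue) pairs

    def bind(self, pattern, queue):
        self.bindings.append((pattern, queue))

    def publish(self, routing_key, message):
        for pattern, queue in self.bindings:
            if matches(pattern, routing_key):
                queue.append(message)

storage, analysis = [], []
exchange = TopicExchange()
exchange.bind("tihm.#", storage)             # everything goes to storage
exchange.bind("tihm.*.physiology", analysis) # only physiology to analysis
exchange.publish("tihm.home42.physiology", {"pulse": 72})
exchange.publish("tihm.home42.environment", {"humidity": 55})
# storage now holds both messages; analysis holds only the pulse reading
```

One exchange with multiple bindings is what lets a single message feed both the storage and analysis servers without the SME backends knowing about either.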
Once the data arrives at the backend, it is validated and persisted to a NoSQL database (MongoDB), with nightly backups, for further analysis. To make the analysed data more readily accessible and consumable by remote users (mainly the clinical monitoring team), a web-based graphical interface was implemented with defined privilege levels for accessing both real-time and historical data.
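A validate-then-persist step might look like the sketch below. The required fields and rules are illustrative assumptions, and an in-memory list stands in for the MongoDB collection (with pymongo, the final step would be something like `db.readings.insert_one(doc)`):

```python
REQUIRED = {"device_id", "metric", "value", "recorded_at"}

def validate(doc):
    """Reject documents missing required fields or with non-numeric values.
    These rules are illustrative, not TIHM's actual validation logic."""
    missing = REQUIRED - doc.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if not isinstance(doc["value"], (int, float)):
        raise ValueError("value must be numeric")
    return doc

collection = []  # stand-in for a MongoDB collection such as db.readings

def persist(doc):
    collection.append(validate(doc))

persist({"device_id": "env-042", "metric": "temperature",
         "value": 21.5, "recorded_at": "2018-01-15T09:30:00Z"})
```

Validating before persisting keeps malformed messages from partner backends out of the analysis pipeline.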
Since the data collected by TIHM is from real patients’ homes, privacy, security, ownership, duration of storage, and types of use were key concerns in this project. The platform servers were physically secure and compliant with NHS standards on secure data storage. For example, the communication with the clinical monitoring team is over the NHS’s private N3/HSCN network.
The DDC portal architecture is not dissimilar to TIHM's: a centralised portal collects data from a number of technology publishers, although there is no requirement to correlate across multiple data sets. Data is not collected in real time; instead, the central portal pulls data from the individual publishers at scheduled times. Any streaming data (e.g. activity) is aggregated by publishers before it is sent, as the project had no need for high-resolution data. The data is stored in a database in case analytics becomes a requirement in the future.
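The publisher-side aggregation can be illustrated with a short sketch that collapses fine-grained activity samples into per-day totals before the portal's scheduled pull. The sample data and step counts are invented for the example:

```python
from collections import defaultdict
from datetime import datetime

def aggregate_daily(samples):
    """Collapse fine-grained (timestamp, steps) activity samples into
    per-day totals, as a publisher might before data is pulled."""
    totals = defaultdict(int)
    for ts, steps in samples:
        day = datetime.fromisoformat(ts).date().isoformat()
        totals[day] += steps
    return dict(totals)

samples = [
    ("2018-03-01T08:05:00", 120),
    ("2018-03-01T17:40:00", 430),
    ("2018-03-02T09:10:00", 250),
]
aggregate_daily(samples)  # {'2018-03-01': 550, '2018-03-02': 250}
```

Aggregating at the publisher keeps the scheduled transfers small and means the portal never has to store or process high-resolution streams it does not need.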
The cloud environments are again geographically dispersed across the partner companies. The main portal was a custom development, hosted in the UK on a Microsoft environment. An intrusion detection system is in use, and additional enterprise applications were developed to assist with management and monitoring of the environment.
As with the other two projects, technology partners at CityVerve collected data using their existing infrastructure, but where CityVerve differed was in making this data accessible to authorised third parties. Independent developers can register to use the portal, and all user management is performed in the central portal.
Data producers from across the city make their data discoverable by advertising it in catalogues. The catalogues were hosted in the central portal which was jointly built by the consortium partners.
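A Hypercat catalogue is a JSON document whose items carry `rel`/`val` metadata pairs that make resources discoverable. The sketch below builds a minimal catalogue in that shape and searches it by description; the resource URLs and descriptions are invented for illustration:

```python
DESC = "urn:X-hypercat:rels:hasDescription:en"

# A minimal catalogue in the Hypercat shape: catalogue-level metadata
# plus items, each an href with its own rel/val metadata pairs.
catalogue = {
    "catalogue-metadata": [
        {"rel": "urn:X-hypercat:rels:isContentType",
         "val": "application/vnd.hypercat.catalogue+json"},
        {"rel": DESC, "val": "Example city data catalogue"},
    ],
    "items": [
        {"href": "https://publisher.example/air-quality",
         "item-metadata": [{"rel": DESC, "val": "Air quality sensors"}]},
        {"href": "https://publisher.example/car-parks",
         "item-metadata": [{"rel": DESC, "val": "Car park occupancy"}]},
    ],
}

def find(cat, term):
    """Return hrefs whose English description mentions the search term."""
    return [item["href"] for item in cat["items"]
            if any(term.lower() in m["val"].lower()
                   for m in item["item-metadata"] if m["rel"] == DESC)]

find(catalogue, "air")  # ['https://publisher.example/air-quality']
```

A developer who finds a matching `href` then fetches the data directly from that publisher, which is what keeps the central portal lightweight.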
The central environment at CityVerve was built and deployed on common web technologies:
- WildFly for the App Server
- NGINX for managing user requests from the web
- WordPress for the developer portal
- A variety of databases were used by publishers
- Hypercat for managing the publication of open data. CKAN was also inherited from partner legacy solutions but is now being migrated to Hypercat as the standard in the project.
The main portal is hosted on UNIX-based systems across Amazon Web Services and BT cloud services. Unlike the TIHM and DDC solutions, very little data flows from the partner companies to the central portal. Instead, Hypercat is used to allow developers to discover sources of data from across Manchester. Developers then access the data directly from the publishers. The resulting solution is stateless and horizontally scalable.
Opening up the city data to third-party developers means access control to the APIs must be managed carefully. API management was achieved using a solution from the UK company Tyk. API management can be provided by cloud hosting services, but CityVerve wanted a solution independent of the cloud provider for portability.
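Conceptually, the gateway's job is to check each request's API key against what that key is entitled to call. The sketch below shows that check in miniature; the key, owner and endpoint names are invented, and a real gateway such as Tyk layers configured policies, rate limits and quotas on top of this:

```python
# Illustrative key store: each API key maps to an owner and the set of
# endpoints it may call. A real gateway would manage this via policies.
api_keys = {
    "key-abc123": {"owner": "dev-team-1", "allowed": {"/air-quality"}},
}

def authorise(key, path):
    """Gateway-style check: is this key allowed to call this endpoint?"""
    entry = api_keys.get(key)
    return entry is not None and path in entry["allowed"]

authorise("key-abc123", "/air-quality")  # True
authorise("key-abc123", "/car-parks")    # False
```

Keeping this logic in a standalone gateway rather than in the cloud provider's native API management is what gives the solution its portability: the same access rules move with the deployment.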
Common patterns across the IoTUK projects
None of the three projects built their solutions on commercial or open-source IoT platforms. The advantage of this approach was complete flexibility in the design of the solution and the ability to port it to other clouds. However, designing, building and deploying a bespoke solution took longer than it would have on a purpose-built IoT platform.
The bespoke environments were built using mostly open-source web technologies. The teams were responsible for selecting the right technology component for each requirement of the project. In hindsight, some of the teams realised they had spent too much time evaluating technology options. If the project were to be repeated, they would spend less time selecting components and focus more on non-functional requirements, such as testing the solution.
In the next blog in this series, we’ll take a closer look at the connectivity between the key parts of the test bed solutions.
You can read the first one here.