
In part one of this series we looked at the physical network and part two covered the logical network; now, in the third and final part, we reach the edge network. Everything that has gone before exists purely to enable the users and devices which connect to the network to deliver a service. For this blog we’ll take a journey through the different user groups, looking at how the network services their requirements and at the way technology is changing events.

Event Production

Making everything tick along from the first day of build until the last day of derig is a team of dedicated production staff working whatever the weather. It is perhaps obvious that they all need internet access, but the breadth of requirements increases year on year. Email and web browsing are only part of the demand: cloud based collaboration tools share CAD designs and site layouts, while event management applications deal with staff, volunteers, traders, suppliers and contractors, all adding to the wider consumption of bandwidth.

Just about everything to do with the delivery of an event these days is done in a connected way and as such reliable connectivity is as important as power and water.

Across the site, indoors and outdoors, carefully positioned high capacity Wi-Fi access points deliver 2.4GHz and 5GHz wireless connectivity to all the key areas such as site production, technical production, stewarding, security, gates and box offices. Different Wi-Fi networks service different users – from encrypted and authenticated production networks to open public networks – each managed with specific speeds and priorities. To deliver a good experience to a high density of users, careful wireless spectrum management is essential, in some cases using directional antennas to focus the Wi-Fi signal in specific directions (rather like using a torch to focus light on a specific area). With so many wireless systems used on event sites, interference can be a real challenge, so wireless scanners are used to look for potential problems, with active management and control making sure there are no ‘rogues’.
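
In practice the ‘rogue’ check boils down to comparing what the scanners see against a list of known access points. The sketch below shows that comparison in a few lines of Python; the SSIDs and MAC addresses are invented for illustration, and a real system would pull live scan data from the wireless management platform rather than a hard-coded list.

```python
# Minimal sketch of a 'rogue' access point check: compare a wireless scan
# against an allow-list of known SSID/BSSID pairs and flag anything unexpected.
# The SSIDs and MAC addresses below are illustrative only.

KNOWN_APS = {
    ("EVENT-PROD", "aa:bb:cc:00:00:01"),
    ("EVENT-PUBLIC", "aa:bb:cc:00:00:02"),
}

def find_rogues(scan_results):
    """Return scan entries whose SSID/BSSID pair is not on the allow-list."""
    return [ap for ap in scan_results
            if (ap["ssid"], ap["bssid"]) not in KNOWN_APS]

if __name__ == "__main__":
    observed = [
        {"ssid": "EVENT-PROD", "bssid": "aa:bb:cc:00:00:01", "channel": 36},
        {"ssid": "Free WiFi",  "bssid": "de:ad:be:ef:00:99", "channel": 6},
    ]
    for rogue in find_rogues(observed):
        print(f"Possible rogue AP: {rogue['ssid']} ({rogue['bssid']}) on channel {rogue['channel']}")
```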

Not everything is wireless though: many devices, such as VoIP phones, and some users require a wired connection, so many cabins have to be cabled back to network switches. Some sites may have over 200 VoIP (Voice over IP) phones providing lines for enquiries, complaints, box offices and emergency services, as well as a reliable communications network where there is no mobile service or where the service struggles once attendees arrive. Temporary cabins play host to an array of IT equipment such as printers, plotters and file servers, all of which need to be connected.

As equipment evolves, more and more devices are becoming network enabled. Power, for example, is a big part of site production, with an array of generators across the site, and their criticality is such that a modern generator can be hooked into the network like any other device to be monitored and managed remotely. On big sites even the 2-way radios may be relayed between transmitters across the IP network, and technical production teams also use the network to test sound levels and EQ from different places.

Event Control

Once an event is running it is event control that becomes the hub of all activity. Alongside laptops, iPads and phones, large screens display live CCTV images from around the site – anywhere from two to over a hundred cameras may be sending in high definition video streams, with operators controlling the PTZ (Pan/Tilt/Zoom) functionality as they deal with incidents. A modern PTZ camera provides an incredible level of detail with a high optical zoom, image stabilisation, motion detection and tracking, picture enhancement and low light/infra-red capability. CCTV may be thought of as intrusive, but at events its role is very broad, playing as much a part in monitoring crowd flows, traffic management and locating lost children as it does in assisting with crime prevention.


These cameras may be 30m up but they can deliver incredibly detailed images across a wide area

Full-HD and 4K Ultra HD cameras can deliver video streams upwards of 10Mbps, with 360 degree panoramic cameras reaching 25Mbps depending on frame rate and quality. This creates many terabytes of data which has to be archived ready to be used as evidence if needed, requiring high capacity servers to both record and stream the content to viewers. One event this year created over 12TB of data – the equivalent of 2,615 DVDs!
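
The arithmetic behind figures like these is straightforward, and a quick back-of-the-envelope calculation is usually how the recording servers are sized. The sketch below is a minimal example assuming a made-up mix of cameras and the 4.7GB capacity of a single-layer DVD; the bitrates, camera counts and event length are illustrative, not figures from a real event.

```python
# Rough storage estimate for continuously recorded CCTV streams.
# Camera mix, bitrates and the 4.7 GB single-layer DVD capacity are
# illustrative assumptions, not real deployment figures.

CAMERAS = [
    {"name": "PTZ full-HD",   "count": 20, "mbps": 10},
    {"name": "360 panoramic", "count": 2,  "mbps": 25},
]
HOURS = 72  # e.g. a three-day event

total_bits = sum(c["count"] * c["mbps"] * 1_000_000 for c in CAMERAS) * HOURS * 3600
total_tb = total_bits / 8 / 1e12      # terabytes (decimal)
dvds = total_bits / 8 / 4.7e9         # 4.7 GB per single-layer DVD

print(f"Approx. {total_tb:.1f} TB recorded, roughly {dvds:,.0f} DVDs")
```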

As everything is digital, playback is immediate allowing incidents to be quickly identified and footage or photos to be distributed in minutes. Content is not only displayed in a main control room but is also available on mobile devices both on the site and at additional remote locations.

Special cameras provide additional features such as Automatic Number Plate Recognition (ANPR) for use at vehicle entrances or people counting capability to assist with crowd management. Body cameras are becoming more common and now drone cameras are starting to play a part.

At the gates staff are busy scanning tickets or wristbands, checking for validity and duplication in real-time across the network back to central servers. The entrance data feeds to event control so they can see how many people have entered so far and where queues may be building. Charts show whether flow is increasing or decreasing so that staff can be allocated as needed.
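
Conceptually the gate check is a simple “is this ticket valid, and has it been used already?” lookup against a central record. The sketch below shows that logic with an in-memory set standing in for the central database; the ticket IDs and gate names are made up, and a real deployment would query shared servers over the network rather than local state.

```python
# Simplified duplicate/validity check for ticket or wristband scans.
# An in-memory set stands in for the central database, purely for illustration.

VALID_TICKETS = {"TKT-0001", "TKT-0002", "TKT-0003"}
already_scanned = set()

def scan(ticket_id, gate):
    if ticket_id not in VALID_TICKETS:
        return f"{gate}: {ticket_id} REJECTED (unknown ticket)"
    if ticket_id in already_scanned:
        return f"{gate}: {ticket_id} REJECTED (duplicate - already used)"
    already_scanned.add(ticket_id)
    return f"{gate}: {ticket_id} admitted ({len(already_scanned)} through so far)"

for ticket, gate in [("TKT-0001", "Gate A"), ("TKT-0001", "Gate B"), ("TKT-9999", "Gate A")]:
    print(scan(ticket, gate))
```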

For music events especially, noise monitoring is important and this often requires real-time noise levels to be reported across the network from monitors placed outside the perimeter of the event. Other monitors are increasingly important, ranging from wind-speed to water levels in ‘bladders’ used for storing water on site. The advent of cheap GPS trackers is also facilitating better monitoring of large plant and key staff.

External information is also important for event control with live information required on weather, transport, news and increasingly social media. Sources such as Twitter and Facebook are scanned for relevant posts – anything from complaints about toilets to potential trouble spots.

Bars, Catering, Traders & Exhibitors

For those at an event selling anything from beer to hammocks, electronic payment systems have been one of the biggest growth areas, from traditional EPOS (Electronic Point of Sale) systems through to chip & pin/contactless PDQs, Apple Pay, iZettle and other non-cash solutions. These systems are particularly critical in nature, transacting many hundreds of thousands of pounds during an event, with some sites deploying hundreds of terminals.

High volume sales areas such as bars also require stock management systems linking both onsite and offsite distribution to ensure stocks are maintained at an appropriate level. A recent development is traders operating more of a virtual stand with limited stock on site; instead the customer browses and orders on a tablet and has the product delivered to their home after the event.

Sponsors

Most events have an element of sponsorship with each brand wanting to lead the pack in terms of innovation and creativity. Invariably these ‘activations’ involve technology in some form – from basic internet access to more involved interaction using technology such as RFID, GPS, augmented reality and virtual reality.

There are often multiple agencies and suppliers involved, with a short window in which to deploy and test just as the rest of the event is reaching its peak of build activity. To be exciting the sponsor wants it to be ‘leading edge’ (or ‘bleeding edge’ as it is sometimes known!), which typically means on-the-fly testing and fixing.

Media & Broadcast


Busy media centres create demanding technical environments

From a gaggle of photographers wanting to upload their photos, to a full mobile broadcast centre, the reliance on technology at a big event is huge. Live streaming is increasingly important, both across the site and also out to content distribution networks. These often require special arrangements with guaranteed bandwidth and QoS (Quality of Service) controls to ensure the video or audio stream is not interrupted. It is not unusual to get requests for upwards of 200Mbps for an individual broadcaster.

More and more broadcasters are moving to IP solutions (away from dedicated broadcast circuits) requiring higher capacity and redundancy to ensure the highest availability. These demands increasingly require fibre to the truck or cabin with dedicated fibre runs back to a core hub.

Alongside content distribution, good quality, high density Wi-Fi is essential in a crowded media centre with the emphasis on fast upload speeds. Encoders and decoders are used to distribute video streams around a site creating IPTV networks for both real-time viewing and VoD (Video-on-Demand) applications. The next growth area is 360 degree cameras used to provide a more immersive experience both onsite and for remote watchers.

Attendees

Then after all this there may be public Wi-Fi. For wide-scale public Wi-Fi (as opposed to a small hotspot) it is typical for at least 50% of attendees to use the network at some point over the duration of an event – usage is higher when event-specific features such as smartphone apps and sponsor activities are promoted.

The step-up from normal production services to a large scale public Wi-Fi deployment is significant – a typical production network would be unlikely to see more than 1,000 simultaneous users, but a big public network can see that rise beyond 10,000, requiring higher density and complex network design, as well as significantly greater backhaul connectivity with public usage pulling many terabytes of data over a few days.

With a significant number of users, a large amount of data can be collected anonymously and displayed using an approach known as heat mapping to show where the highest density of users are and how users move around an event site. This information is very useful for planning and event management.
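
The counting step behind heat mapping is simple: bin each anonymous client position into a grid cell and count devices per cell. A minimal sketch is shown below, assuming positions have already been estimated elsewhere; the coordinates and the 50m cell size are arbitrary examples.

```python
# Minimal sketch of the counting step behind a Wi-Fi 'heat map':
# bin anonymous client positions into grid cells and count per cell.
# Coordinates and grid size are illustrative assumptions.
from collections import Counter

GRID_METRES = 50  # size of each heat-map cell

def heat_map(positions):
    """positions: iterable of (x, y) in metres from a site origin."""
    return Counter((int(x // GRID_METRES), int(y // GRID_METRES)) for x, y in positions)

clients = [(12, 40), (30, 55), (205, 410), (210, 402), (214, 399)]
for cell, count in heat_map(clients).most_common():
    print(f"cell {cell}: {count} devices")
```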


Public Wi-Fi has to deal with thousands of simultaneous connections

Break It Down

As the final band plays its encore, or the show announces it is time to close, the team switches to follow the carefully designed break down plan. What can take weeks to build is removed within a couple of days, loaded into lorries and shipped back to the warehouse to be reconfigured and sent out to the next event. Sometimes tight scheduling means equipment goes straight from one country or job to the next. But not everything is removed at once: a subset of services remains for the organisers whilst they clear the site, until the last cabin is lifted onto a lorry and we remove the last Wi-Fi access point and phone.

The change over the last five years has been rapid and shows no sign of slowing down as demand increases and services evolve. Services such as personal live streaming, augmented reality, location tracking and other interactive features are all continuing to push demands further.

So yes, we provide the Wi-Fi at events, but when you see an Etherlive event network on your phone, spare a thought for what goes on behind the scenes.

 


In the first Behind the Wi-Fi blog we looked at some of the physical aspects of building out a large scale temporary network; this time we look at how it all comes together as a ‘logical network’, or more simply how all of the networking components work together. With some event networks servicing 10,000+ simultaneous users and consuming anywhere between 100Mbps and 1Gbps of internet connectivity, chaos would ensue unless it were carefully designed and implemented.

Although networks are thought of as being one big entity in reality they are broken down into many ‘virtual networks’ which operate independently and are isolated from each other. This approach is very important from a management, security, reliability and performance point of view. For example, you would not want public users being able to access a network that is being used for payment transactions.

All of our events are rated based on a complexity score and this helps define how the network is designed. Larger and more complex events are designed using a fully routed topology rather than a simple flat design. This approach provides the best performance and resilience, operating a bit like the electricity ‘grid’: a number of nodes are connected together in a resilient manner to provide a multipath backbone, and the customer services are then connected to those nodes. This means each node is provided with a level of isolation and protection which is not possible on a simpler flat network.

This isolation becomes important as a network grows because, when devices connect, they are designed to send out ‘broadcasts’ to everyone on the network. With a large number of devices these broadcasts can become overwhelming on a flat network, but on a routed network they can be filtered out at the appropriate node. Faulty or incorrectly configured equipment can sometimes cause ‘network storms’, where huge amounts of network traffic are created in milliseconds, reducing performance for all users; a routed topology offers much more protection against this, isolating any problems to a small subsection of the network.

Every site has different network requirements, so there may be anywhere between 5 and 50 virtual networks, known as VLANs, to ensure all the appropriate users and network traffic are kept separate. Traffic shaping rules are applied to these different networks to prioritise the most important ones, along with filtering and logging as required.
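
To make that concrete, a VLAN plan can be thought of as a small table of IDs, subnets, priorities and rate limits. The sketch below shows an illustrative plan in Python along with a couple of sanity checks; the VLAN IDs, names, subnets and rates are invented and do not describe a real site configuration.

```python
# Illustrative VLAN plan: each virtual network gets an ID, a subnet,
# a priority class and a rate limit. All values are invented examples.
import ipaddress

VLAN_PLAN = [
    {"id": 10, "name": "production", "subnet": "10.10.0.0/24", "priority": "high",        "rate_mbps": 100},
    {"id": 20, "name": "voip",       "subnet": "10.20.0.0/24", "priority": "realtime",    "rate_mbps": 20},
    {"id": 30, "name": "payments",   "subnet": "10.30.0.0/24", "priority": "high",        "rate_mbps": 10},
    {"id": 40, "name": "public",     "subnet": "10.40.0.0/22", "priority": "best-effort", "rate_mbps": 500},
]

def check_plan(plan):
    """Basic sanity checks: unique VLAN IDs and no overlapping subnets."""
    ids = [v["id"] for v in plan]
    assert len(ids) == len(set(ids)), "duplicate VLAN ID"
    nets = [ipaddress.ip_network(v["subnet"]) for v in plan]
    for i, a in enumerate(nets):
        for b in nets[i + 1:]:
            assert not a.overlaps(b), f"overlapping subnets: {a} and {b}"
    return "plan OK"

print(check_plan(VLAN_PLAN))
```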

At the heart of this is what we call the ‘core’, the set of components which control the key aspects of the network such as the internet access, filtering, firewall, authentication, routing, wireless management, remote access and monitoring.

With several different connections to the internet, traffic is distributed across the different connections – this may be by load balancing, bonding, or policy routing. This is a complex area as different types of network traffic may only be suitable for certain types of connection. For example, voice traffic and encrypted VPNs do not work well over a satellite link due to the high latency (delay) of satellite.
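
A policy routing decision is essentially “given this class of traffic, which uplinks are acceptable and which is best right now?”. The toy sketch below illustrates the idea of steering latency-sensitive traffic away from a satellite link; the link names, latency and capacity figures are assumptions for illustration only, not a real routing policy.

```python
# Toy policy-routing decision: pick an uplink for a traffic class.
# Latency-sensitive classes (VoIP, VPN) avoid the high-latency satellite link;
# bulk traffic may use any link. Link names and figures are illustrative.

LINKS = [
    {"name": "fibre",     "latency_ms": 15,  "capacity_mbps": 1000},
    {"name": "microwave", "latency_ms": 25,  "capacity_mbps": 300},
    {"name": "satellite", "latency_ms": 600, "capacity_mbps": 50},
]

LATENCY_SENSITIVE = {"voip", "vpn", "video-call"}

def choose_link(traffic_class, max_latency_ms=100):
    candidates = LINKS
    if traffic_class in LATENCY_SENSITIVE:
        candidates = [l for l in LINKS if l["latency_ms"] <= max_latency_ms]
    # prefer the acceptable link with the most capacity
    return max(candidates, key=lambda l: l["capacity_mbps"])["name"]

for cls in ("voip", "web", "vpn"):
    print(cls, "->", choose_link(cls))
```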

The core routers also contain a firewall, which is the protection between the external internet and the internal network. Protecting against intrusion and hacking is sadly a very important factor, with all internet connected systems subject to a constant stream of attacks from remote hackers in places such as China and Russia.

Additional firewalls also exist to control traffic across the internal networks. By default, everything is blocked between networks, but for some services limited access may be required across VLANs, so specific rules are added – an approach known as pin-holing. Filtering can be used to block particular websites or protocols (such as BitTorrent and peer-to-peer networking); this may be done to protect users from undesirable content or to ensure the performance of the network is maintained.
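
The default-deny-plus-pin-holes idea is easy to express in a few lines: everything between networks is blocked unless an explicit rule allows it. The sketch below is a simplified illustration; the network names, protocols and ports are made up, and a real firewall of course matches on packets and addresses rather than labels.

```python
# Default-deny firewall between VLANs with explicit 'pin-hole' rules.
# Networks, ports and rules below are made up for illustration.

PINHOLES = [
    # (source vlan, destination vlan, protocol, destination port)
    ("production", "payments",   "tcp", 443),   # e.g. till back-office over HTTPS
    ("voip",       "production", "udp", 5060),  # e.g. SIP signalling
]

def allowed(src_vlan, dst_vlan, proto, port):
    if src_vlan == dst_vlan:
        return True  # traffic within a VLAN is not inter-VLAN filtered here
    return (src_vlan, dst_vlan, proto, port) in PINHOLES  # otherwise default deny

print(allowed("production", "payments", "tcp", 443))  # True  - pin-holed
print(allowed("public", "payments", "tcp", 443))      # False - blocked by default
```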

mediacentre

Prioritisation of voice traffic from phones is important to ensure call quality, especially in a media centre

Rate shaping and queuing are additional important controls, managing bandwidth for specific groups and users to ensure everyone gets the speeds they asked for. This is especially important for real-time services such as voice calls and video streaming. Traffic is managed at a user and network level using dynamic allowances so that all available bandwidth is utilised in the most effective manner without impacting any critical services. Users or networks may be given a guaranteed amount of bandwidth, but this may be exceeded in a ‘burst’ mode provided there is spare capacity on the incoming internet links.
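
One classic way to implement that “guaranteed rate plus burst” behaviour is a token bucket, where allowance accumulates while a flow is quiet and can be spent in short bursts. The sketch below is a minimal single-flow version for illustration, not the shaping engine actually used on site; the rate and burst figures are arbitrary.

```python
# Minimal token-bucket shaper: a flow gets a sustained rate, with a bucket
# that allows short bursts when spare allowance has accumulated.
# This is a single-flow illustration, not a production shaping engine.
import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True   # send now
        return False      # queue or drop

bucket = TokenBucket(rate_bytes_per_s=125_000, burst_bytes=50_000)  # ~1 Mbps, 50 kB burst
print([bucket.allow(1500) for _ in range(5)])
```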

The core also houses the PBX, the onsite telephone exchange which manages all the phones and calls with big sites having as many as 200 phones and generating thousands of calls. All the features of a typical office telephone system are implemented with ring groups, voicemail, call forwarding, IVR, etc. As all of the phones are Voice Over IP (VoIP) they are connected via standard network cabling so can easily be moved between locations. Additional numbers and handsets can also be added very quickly.

The vast majority of users these days are connected via the Wi-Fi network which requires careful management and design. The detail behind this would run to several pages so for the purposes of this blog we will keep things relatively simple and look at a few key aspects.

Frequency/Standard – Wi-Fi currently operates at two frequencies, 2.4 GHz and 5 GHz. As discussed in previous blogs there are many issues around 2.4 GHz so all primary access we provide is focussed on 5 GHz with only public access and some other legacy devices connected via 2.4 GHz. All of the Wi-Fi access points we use are at least 802.11n capable with the majority now 802.11ac enabled to provide the highest speeds and capacity.

Wireless Network Names – When you look for a wireless network on a device you see a list of available networks, these identifiers are known as SSIDs and control the connection method to the network. Different SSIDs will be used for different audiences, with some SSIDs hidden such that you can only try to connect to it if you know the name. Wireless access points can broadcast multiple SSIDs at the same time but there are limits and best practice as to how many should be used. Some SSIDs may be available across the entire network whereas others may be limited to specific areas.

Encryption & Authentication – These two areas are sometimes confused but relate to two very different aspects. Encryption deals with the way the information which is sent wirelessly is scrambled to avoid any unauthorised access. It is similar to using a website starting with ‘https’ but in this case all information between the device and the wireless access point is encrypted. There are several standards for doing this and we use WPA2 which is the current leader. Not all networks are encrypted and, as is the case with most public Wi-Fi hotspots, public access is generally unencrypted.

Authentication deals with whether a user is allowed to use a particular network and ranges from ‘open access’, where a user just clicks on an accept button for the terms and conditions, through classic username/password credentials, to RADIUS or certificate based systems which offer the highest levels of protection. One common approach is the use of a pre-shared key or pass-phrase as part of the WPA standard; knowing the pass-phrase is in effect an authentication challenge. The pass-phrase is also the seed for the encryption, and the longer the pass-phrase the harder it is for a hacker to crack the encryption. The pass-phrase approach is simple to manage but has an inherent weakness in that it is easily compromised by uncontrolled sharing between users.
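
For the curious, the way the pass-phrase ‘seeds’ the encryption in WPA2-Personal is well defined: the 256-bit master key is derived from the pass-phrase and the network name (SSID) using PBKDF2 with HMAC-SHA1 and 4,096 iterations. The snippet below reproduces that derivation with Python’s standard library; the SSID and pass-phrase are examples only.

```python
# WPA2-Personal key derivation: the 256-bit pairwise master key comes from
# the pass-phrase and the SSID via PBKDF2-HMAC-SHA1 with 4096 iterations.
# The SSID and pass-phrase below are examples only.
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, dklen=32)

print(wpa2_pmk("correct horse battery staple", "EVENT-PROD").hex())
```

Because the SSID is part of the derivation, the same pass-phrase produces a different key on a different network name, and a longer pass-phrase enlarges the search space an attacker must cover when trying to crack a captured handshake.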


Large scale Wi-Fi is a particularly complex area with many different requirements and challenges

On top of this various other services are employed to protect and manage the Wi-Fi. Client isolation for example stops a user on the network from seeing any network traffic from another user, whereas band steering & load balancing seamlessly move users between frequencies and wireless access points to ensure each user gets the best experience.

The rise of the smartphone has had a major impact on Wi-Fi networks at events due to the way they behave. If a smartphone has its Wi-Fi turned on, then it constantly hunts and probes for Wi-Fi networks so even in this ‘un-associated’ state it still creates an element of load on the network. Mechanisms have to be employed to drop the devices from the network unless they are truly connected (‘associated’) and active (accessing a web page for example). Even connected devices are typically dropped fairly quickly once they cease to be active so that other users can connect. This all happens very fast and transparently to the user with the device reconnecting automatically when it needs to.
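
A much simplified version of that idle-timeout behaviour is sketched below: any client with no recent activity is dropped from the client table so its airtime and address lease free up. The timeout value and the client entries are illustrative assumptions rather than real controller settings.

```python
# Simplified idle-timeout sweep: disassociate clients with no recent activity.
# The 300-second timeout and the client table are illustrative only.
import time

IDLE_TIMEOUT_S = 300
clients = {  # mac address -> timestamp of last activity
    "aa:bb:cc:00:00:10": time.time() - 20,
    "aa:bb:cc:00:00:11": time.time() - 900,
}

def sweep(client_table, now=None):
    now = now or time.time()
    idle = [mac for mac, last in client_table.items() if now - last > IDLE_TIMEOUT_S]
    for mac in idle:
        client_table.pop(mac)
        print(f"dropping idle client {mac}")  # the device re-associates when active again
    return idle

sweep(clients)
```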

This array of logical controls processes millions of pieces of information every second, routing them like letters to the correct address, discarding damaged or undesirable ones and acknowledging when they have been received. Each of the components has to work in harmony, with sites having anywhere up to around 30 routers, 200 network switches and 200 Wi-Fi access points. To manage this, standard, pre-tested configurations and builds are used, as this reduces the risk of introducing a problem via a new firmware or configuration change.

Next time in the final part of this series we will look at how this all comes together to deliver the end services for the users and the impact it all has on the event.

 

photo credit: Binary code via photopin (license)


“You guys do Wi-Fi at events, right?” is typically the way most people remember us; the irony being that the invisible part of our service is in reality the most visible. Unless you know what you are looking for, at a large event site you are unlikely to notice the extensive array of technology quietly beating away like a heart.

From walking up to the entrance and having your ticket scanned, watching screens and digital signage, using a smartphone app or buying something on your credit card before you leave, today’s event experience is woven with technology touchpoints. Watching a live stream remotely or scrolling through social media content also relies on an infrastructure which supports attendees, the production team, artists, stewards, security, traders & exhibitors, broadcasters, sponsors and just about everyone else involved.

During a big event the humble cables and components which enable all of this may deal with over 25 billion individual electronic packets of data – all of which have to be delivered to the correct location in milliseconds.

In the first of three blogs looking behind the scenes we take a look at how the core network infrastructure is put together.

Let’s Get Physical

When an event organiser starts the build for an event, often several weeks before live, one of the first things they need is connectivity to the internet. Our team arrives at the same time as the cabins and power to deliver what we call First Day Services – a mix of internet connectivity, Wi-Fi and VoIP telephony for the production team.

Connectivity may be provided by traditional copper services such as ADSL or via satellite, but is now more typically via optical fibre or a wireless point-to-point link as the demands on internet access capacity are ever increasing. Even 100Mbps optical fibre connections are rapidly being surpassed, with a need for 1Gbps fibre circuits.


PSTN, ISDN, ADSL and fibre all are commonplace on a big site

Wireless point-to-point links relay connectivity from a nearby datacentre or other point of presence; however, this introduces additional complexity with the need for tall, stable masts at each end of the link to create the ‘line of sight’ required. To avoid interference and improve speeds, the latest generations of links utilise frequencies as high as 24GHz and 60GHz to provide speeds over 1Gbps. Even with the reliability of fibre and modern wireless links it is still key to have redundancy, so a second connection is used in parallel to provide a backup.
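
One reason these links need tall masts and careful planning is that radio loss grows with both distance and frequency. The sketch below evaluates the standard free-space path loss formula for a few frequencies over an arbitrary 2km hop; it ignores rain fade, antenna gains and other real-world factors, so it is only a rough illustration of the trend.

```python
# Free-space path loss for a point-to-point link: loss (dB) grows with both
# distance and frequency, which is why high-band links need clear line of sight
# and careful link budgeting. The 2 km distance is an arbitrary example.
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

for f in (5.8, 24, 60):
    print(f"{f:>4} GHz over 2 km: {fspl_db(2, f):.1f} dB free-space loss")
```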

From there on the network infrastructure is built out alongside the rest of the event infrastructure working closely with the event build schedule. Planning is critical with many sites requiring a network infrastructure as complex as a large company head office, which must be delivered in a matter of days over a large area.

The backbone on many sites is an extensive optical fibre network covering several kilometres and running between the key locations to provide the gigabit and above speeds expected. On some sites a proportion of the fibre is installed permanently – buried into the ground and presented in special cabinets – but in most cases it is loose laid, soft dug, flown, ducted, and ramped around the site. Pulling armoured or CST (corrugated steel tube) fibre over hundreds of metres at a time through bushes, trees, ditches and over structures is no easy task!

Optical fibre cable can run over much longer lengths than copper cable whilst maintaining high speeds; however, it is harder to work with, requiring, for example, an exotically named ‘fusion splicer’ to join fibre cores together. On one current event which uses a mix of 8, 16 and 24 core fibre there are over 1,200 terminations and splices on the 5.5km of fibre. With the network now a critical element, redundancy is important, so the fibre is deployed in ‘rings’ so that all locations are serviced by two independent pieces of fibre – a tactic known as ‘diverse routing’ – meaning that if one piece of fibre becomes damaged the network continues to operate at full speed.
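
The property a ring gives you is easy to test: remove any single link and every location should still be reachable. The sketch below runs that check on a toy four-node ring; the node names are invented, and a real design tool would model the actual fibre routes rather than a hand-written list.

```python
# Check that a fibre ring survives any single break: remove each link in turn
# and confirm every node can still reach every other. Node names are invented.

LINKS = [("core", "pop-a"), ("pop-a", "pop-b"), ("pop-b", "pop-c"), ("pop-c", "core")]

def connected(nodes, links):
    """Simple reachability check from an arbitrary starting node."""
    if not nodes:
        return True
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        n = stack.pop()
        if n in seen:
            continue
        seen.add(n)
        stack.extend(b if a == n else a for a, b in links if n in (a, b))
    return seen == nodes

nodes = {n for link in LINKS for n in link}
for broken in LINKS:
    remaining = [l for l in LINKS if l != broken]
    status = "OK" if connected(nodes, remaining) else "ISOLATED NODES"
    print(f"break {broken[0]}-{broken[1]}: {status}")
```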

Each secure fibre break-out point, known as a Point of Presence (POP), is furnished with routing and switching hardware within a special weatherproof and temperature controlled cabinet to connect up the copper cabling which is used to provide the services at the end point such as VoIP phones, Wi-Fi Access points, PDQs and CCTV cameras.

Each cabinet is fed power from the nearest generator on a 16-amp feed and contains a UPS (Uninterruptible Power Supply) to clean up any power spikes and ensure that if the power fails not only does everything keep running on battery but also an alert is generated so that the power can be restored before the battery runs out.

Although wireless technology is used on sites, there is still a lot of traditional copper cabling using CAT5, as this means power can be delivered along the same cable to the end device. Another aspect is speed: most wireless devices are limited to around 450Mbps shared between multiple users, so the actual speed is too low for demanding services, whereas CAT5 will happily run at 1Gbps to each user.

Where critical reliability is required, wireless also carries risks from interference, so where possible it is kept to non-critical services. There are always times when it is the only option, though, so dedicated ‘Point-to-Point’ links are used – these are similar to normal Wi-Fi but use special antennas and protocols to improve performance and reliability.


A head for heights is important for some installs!

Another significant technology on site is VDSL (Very High Bit-Rate DSL), similar in nature to ADSL used at home but run in a closed environment and at much higher speeds. It is the same technology as is used for the BT Infinity service enabling high speed connections over a copper cable up to around 800m in length (as opposed to 100m for Ethernet).

All of these approaches are used to build out the network to each location which requires a network service, be it a payment terminal (PDQ) on a stand or a CCTV camera perched high up on a stage. Although there is a detailed site plan, event sites are always subject to changes, so our teams have to think on their feet as the site evolves during the build period. Running cables to the top of structures and marquees can be particularly difficult, requiring the use of cherry pickers to reach the required height.

After the event all of the fibre is coiled back up and sent back to our warehouse for re-use and storage. The copper cable is also gathered up but is not suitable for re-use so instead it is all recycled.

The deployment of the core network is a heavy lift in terms of physical effort but the next step is just as demanding – the logical network is how everything is configured to work together using many ‘virtual networks’ and routing protocols. In part 2 we will take a look at the logical network and the magic behind it.

 

Photo Credit: Fibre Optic via photopin (license)


Computer users are familiar with viruses and malware, but the term ‘ransomware’ is a relative newcomer brought to prominence after several highly publicised cases. In 2014 the Sony attack brought ransomware into the headlines, costing the company millions and effectively taking its entire computer network offline. Attacks have continued to rise, with 2016 expected to reach a new peak and bring more sophisticated forms. In April 2016 a cryptolocker variant which included users’ home addresses started to appear, tricking people into thinking it was a legitimate link.

The principle behind ransomware is straightforward: a user’s computer becomes infected via one of the normal routes, such as clicking on a URL in an email, but instead of installing a virus which is annoying or disruptive, the software encrypts all, or a subset, of the user’s files, rendering them unreadable unless the user agrees to pay a ransom to recover the key needed to decrypt them. With modern encryption techniques there is no realistic way of decrypting the files without the key.

Alongside the rise of ransomware, users are increasingly taking advantage of file synchronisation services such as Google Drive, Microsoft OneDrive, Dropbox and Box, which are great for maintaining files across multiple devices and providing a transparent backup. The downside of these services is that if a file becomes corrupted or infected with ransomware such as Cryptolocker on one device, the damaged or infected file quickly replicates across all devices.

For event staff sharing files across teams and sending out links to files on cloud based services the risk is high. It only takes a moment, one click on a URL in an email from a known source and suddenly you have a potential disaster on your hands at a critical moment.

Avoiding infection is always the most desirable approach and there is no excuse for not running a real-time virus scanner with up to date virus definitions. There are plenty available and some of these are available free or built into the operating system as with Microsoft Windows 8 and 10. No virus scanner is infallible but they are an important line of defence.

Taking a few moments to double check an email or URL before clicking on it can save hours of frustration – the scammers are well versed on how to make an email and URL look genuine. Better still, don’t click the link but login to the cloud service directly from a browser and navigate to the new content – it takes a few moments longer but is much safer.

The proliferation of file synchronisation services has tended to mean people focus less on traditional backups, but this can create a data recovery disaster if a user suffers a ransomware attack, as all instances of the files become infected. The solution is to ensure that multi-version file history is enabled. Each of the synchronisation services provides this in slightly different ways and to different levels (in some cases it is a paid extra), but the principle is the same – when a file is changed the previous version (or versions) are still stored and can be reverted to. If you suffer an attack you can revert to an earlier, non-infected version.

For extra peace of mind, especially for critical documents, a weekly backup onto a USB memory stick or a writeable DVD which is then put away in a secure location is cheap and effective. Spending a few minutes now to make sure you have a backup strategy can save hours of time, stress and potential cost at a later date, as sadly these attacks will continue to increase in frequency and sophistication.

Photo credit: Cryptolocker ransomware via photopin (license)

Event technology plays a major role in the way we plan and organise our events today. According to the below infographic, which takes a close look at the impact of technology on the success of events in 2016, a huge 75% of event professionals are expected to buy apps to facilitate engagement with their audience. Many companies have also stepped up their live streaming activities to reach a larger audience and stand out from the competition. Social media, which offers companies powerful opportunities to promote event awareness or create a new information channel, remains another top favourite.

Of course all of this introduces potential complexity which requires detailed knowledge and planning across a broad spectrum of technology. With the summer season of events already ramping up fast, it is critical that organisers plan well in advance and work with the right experienced people to ensure all the different aspects are integrated into a realistic and workable solution. Last minute panics on-site are not desirable and generally push up costs; a well planned, integrated approach is much better!

Source: http://www.losberger.co.uk/

Event Technology: Will This Define Success in 2016?


Easter always marks a transition point for us – from delivering service primarily to indoor events to the large scale outdoor events. With Easter chocolate consumed there is a rapid ramp in activity both internally and from our customers as plans are finalised and delivery commences in what becomes a back-to-back run until October.

Every year there is talk of ‘the next big thing’ and exciting technologies on the horizon but in reality at the sharp end of delivery the evolution, rather than revolution, of key services is just as important. So with the summer ramp about to start here are four key event technology areas to focus on.

Connectivity

It all starts with connectivity, and if one thing is certain it’s that events need more capacity each year. From the data we have gathered over the last eight years you could probably build a detailed model of the rate of increase, but in general we see a need for at least a 25-35% increase year on year, and often more depending on what additional services are required. Lack of internet capacity on site remains one of the most common and frustrating issues at events, and this is normally down to a lack of budget or not spotting potential issues such as high usage driven by a mobile app or streaming.
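
Compounding at that rate adds up quickly, which is why capacity planning needs to look more than one season ahead. The short sketch below projects an arbitrary 100Mbps starting requirement forward four years at 25% and 35% growth; the starting figure is purely an example, not a recommendation.

```python
# Compound growth in required internet capacity: at 25-35% per year an
# arbitrary 100 Mbps requirement roughly doubles or trebles within four years.
start_mbps = 100
for growth in (0.25, 0.35):
    capacity = start_mbps
    projection = []
    for year in range(1, 5):
        capacity *= 1 + growth
        projection.append(f"year {year}: {capacity:.0f} Mbps")
    print(f"{int(growth * 100)}% growth -> " + ", ".join(projection))
```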

There are trigger points at which existing services such as ADSL, FTTC (the next generation of ADSL), satellite and certain fibre services become limiting and need to be replaced with higher capacity solutions. Many of those solutions can have significant lead times, so it is important to plan connectivity as soon as possible.

Payment Systems

The debate around traditional ‘chip & PIN’, closed loop payment systems (wristbands) and open loop systems (‘contactless’) may be ongoing but it doesn’t really matter which route you choose; attendees, exhibitors and traders simply want payment systems that work.

Early, clear communication on what solutions are available at an event is critical as traders and exhibitors need support through this somewhat complex & confusing area. Expecting mobile GPRS payment terminals to work reliably on a crowded event site is crazy and can have a significant impact on revenue.

System Integration

Each year the integration between different aspects of technology at events becomes more complex and the need to coordinate and manage all the different requirements becomes more important. From the basics of wireless spectrum management and access control, to the ad hoc needs of sponsors, audio and broadcasters, each requirement can have an impact on the success of an event, so the sooner it is identified the better it can be dealt with.

Safety & Security

Safety and security breaks down into two areas – the use of technology to help manage and secure the event, and the security of the technology itself.

Sadly, hacking isn’t just something that happens to governments and large companies, it is a continuous real threat. Externally we see frequent attempts to access services and systems from locations such as Russia and China. This is going on all the time across the internet and event sites are just as prone to access attempts as any other internet node.

Risks also exist within an event site, generally from people just trying to access Wi-Fi networks but sometimes the intent is more sinister. With so many critical services running on event networks maintaining appropriate security is essential. Encrypted, managed networks, strong authentication, intrusion detection, client isolation and firewalls are just some of the techniques required to keep the network secure.

Using technology to keep an event site physically safe and secure has become increasingly important over the last few years. The obvious aspect is CCTV with high definition cameras capable of excellent detail and response but there is much more available to organisers. Visibility of real-time access control data from gates, scans of social media streams, Automatic Number Plate Recognition (ANPR) of vehicles entering a site and ‘heat mapping’ of devices across an event site can all be combined to provide an insight to event control of what is happening on site.

Event technology has already come a long way from just being about internet access and it continues to evolve rapidly but this evolution and dependence requires an increased focus on planning to ensure it all comes together seamlessly.

Etherlive is working with several customers who are preparing their venues, and with various production organisations, to support the UK General Election happening on May 7th 2015. Many of the event teams are working on similar aspects and issues; here are our top tips:

Audit Venues (first and early!) – Many venues set customers’ expectations on how many concurrent wireless connections they can support and what internet access is available, but site visits to confirm this data are critical. The earlier the site visit, the more opportunity both the venue and the production team have to address any issues; for example arranging more capacity on the core internet access temporarily or increasing Wi-Fi density and capacity in certain areas.

Consider Demand – In 2010, when the polls closed, the first generation iPad had just been launched, with many people still considering it a fad. Now most people, and certainly press, carry multiple devices which need high speed connectivity – their phone, tablet, laptop and potentially even watch! Twitter users (around 70,000 then) were sending around 50 million tweets per day; now it’s ten times that. Facebook, just becoming mainstream in 2010, now includes video streaming, and people routinely use Skype and FaceTime for their calls, whilst cloud based data services such as Dropbox, Office 365 and Google Docs are commonplace.


Delivering event Wi-Fi to the debates

Consider Security – A little discussed element of Wi-Fi is that there are many ways of deploying it with (or without) security and encryption. Recent press on the Sony hack and others should mean that organisers check what level of security is being provided. At the very least this should be a number of individual networks for organisers, candidates, media and attendees. The preference should be for authentication and encryption with suitable logging and monitoring.

Have a Backup Plan – Consider what happens if the internet connection breaks. Is there a second connection that can be used if required? Could desperate users be taken to a different area at least to upload their photos and emails?

Engage Attendees – Similar to the needs of the media, organisers and those attending events will be keen to remain connected to social media and their own commitments. Providing news feeds, Twitter walls and video screens relaying up-to-the-minute information all helps to create a buzz and promote interaction.

Regardless of whether you are supporting the election by hosting an event at your venue or are responsible for organising one, successful technology delivery will be a key factor.

Announcements from Apple always have a certain sparkle; their PR is the slickest, their presentation is faultless (although in this case it showed even the best can have technology problems as the video stream faltered frequently) and, most importantly, they have a knack of defining a market.

Apple were not the first with a portable MP3 player, yet the others are long forgotten as the iPod defined the genre. Before the iPad was launched in 2010 many, many tablets had come and gone. Arguably technology had finally caught up, and the introduction of the iPad allowed a generation to enjoy lightweight computing without overheating laptops on laps, creaking screens and tapping keyboards.

The announcements today of the iPhone 6, the iPhone 6 Plus and the Apple Watch are exciting in themselves. The phone models are an extension of the brand we all know well, though this time with an increase in screen size (4.7” and 5.5” respectively versus the typical iPhone 4”). The Apple Watch is an extension of the handset, with a screen that allows access to apps, information, maps and much more.

However, there is one absolutely critical technology included in all three products which the market has demanded for some time; Near Field Communications (NFC).


Can a push from Apple get cashless moving and vanish those queues?

The inclusion of NFC facilitates payment for goods directly from the device. A swipe of the phone, or now watch, against an NFC reader allows the transaction to complete. Again other manufacturers have offered this for some time, but it takes an influencer like Apple to really drive customer awareness.

One thing Apple are experts at is understanding that it takes more than just technology to go from niche interest to mainstream – it’s about the complete package. The iPod owes much of its success to iTunes which in turn was successful because Apple had lined up a huge catalogue from all the record labels.

In this case it’s not just about the inclusion of NFC; it’s as much about the launch of Apple Pay, where they have already lined up Mastercard and Visa as launch partners in the US, along with retailers such as Subway and McDonald’s. In a smart move Apple has also said that with Apple Pay they have no access or visibility to the transaction data, quelling fears over data protection which could have been a hindrance.

How powerful will it be to use something on your wrist to process payment? Very.

What does this mean for events? So far open-loop and closed-loop contactless payment systems at events have seen slow adoption, partly due to implementation cost and a lack of agreed standards, and partly due to customer resistance driven by privacy concerns.

Although it will still take time for suitable penetration of the new devices this long awaited inclusion will accelerate and change the landscape for mobile/contactless payment and associated services.

Those without a strategy for contactless payment systems need to start working out how best to take advantage of a system which allows immediate transactions without the need to top up cash (and then bank it the other side).

It also puts into doubt the longer term viability of proprietary closed loop systems as users are more likely to trust well known established names which have a broader acceptance.

For event organisers it also means more consideration for the ancillary services like charging and, of course, connectivity which all of this relies on.

Whatever happens, if anything was going to highlight NFC technology to the wider world (whether what they buy has an Apple logo on it or not) this is it.