What will music be like in 20 years?

Can’t get enough technical death metal? Good news for fans of this high-precision, high-velocity musical subgenre: the indefatigable Dadabots have been thrashing it out live on YouTube 24 hours a day, seven days a week since March – without ever repeating themselves. Quite the feat. Or it would be if Dadabots was a human band; but as you’ve probably already guessed, it’s actually a neural network generating ‘new’ music on the fly.

More like this:
- Is this the world’s first good robot album?
- Can data reveal the saddest number one song ever?
- What is the world’s hardest singing style?

Although computer systems that produce musical pastiches are nothing new, most experiments to date have focused on the classics: Bach and Beethoven, for example. Yet while the western classical canon is dominated by harmony, much contemporary music is distinguished not so much by tunes as by timbre. Different genres vary in their sonic textures as much as, or more than, in their melodic figures – a trend that has accelerated as technology has reshaped recording, production and performance.

The Dadabots project is on the fringes both musically and technologically, but it highlights a trend that’s likely to dominate the next 20 years in popular music. Really significant change, in music as elsewhere, comes from disruptive innovations that most don’t see coming: few outside Silicon Valley anticipated the iPod. That said, just as the past two decades were dominated by the shift from physical to digital – from compact discs to MP3s and now streams – so the next two are likely to be dominated by the shift from digital music to music created and curated by algorithms.

Given the huge success of automated musical platforms like Spotify, you might think that transition has already happened. But just as the compact disc (released in 1982) was only the first consumer manifestation of a shift to digital that was already well under way in the studio – one that truly came of age with the appearance of Napster (1999) and the iPod (2001) – so when it comes to algorithmic music, we’re having our compact-disc moment, not our iPod moment. Over the next two decades, we can look forward to smart machines transforming how we make, discover and commune with music.

Playing by ear

Rebecca Fiebrink’s musical controller doesn’t look like much: it’s just a micro:bit, a barebones educational computer that fits into your hand. It’s the software that makes it come to life: a system called Wekinator, which aims to simplify next-generation music-making using machine learning. Wekinator learns from human actions and associates them with computer responses, thus eliminating the need to code.

Fiebrink holds the controller in the air and associates it with a sound, then repeats the process at another point in space with another sound. Moving between the two points results in a smooth transition from one sound to another – a pleasing effect. But it’s when she moves the controller to a third point, off the line, that it really comes into its own: the Wekinator creates a new sound. If you like it, you can keep it and add more; if not, you can try again.
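To make that concrete, here is a minimal sketch in Python of the kind of mapping being learned: a few controller positions paired with synth parameters, with new positions blended from the trained examples. The positions and parameter values are invented for illustration, and the inverse-distance blend is an assumption – Wekinator’s own learning algorithms are more sophisticated.

```python
import numpy as np

# Invented training data: each row pairs a 3-D controller position
# with the synth parameters (e.g. pitch in Hz, filter amount) it triggers.
positions = np.array([[0.0, 0.0, 0.0],   # point A
                      [1.0, 0.0, 0.0],   # point B
                      [0.5, 1.0, 0.0]])  # point C, off the A-B line
params = np.array([[220.0, 0.2],         # sound demonstrated at A
                   [440.0, 0.8],         # sound demonstrated at B
                   [330.0, 0.5]])        # sound demonstrated at C

def predict(pos, eps=1e-9):
    """Inverse-distance-weighted blend of the demonstrated sounds.

    Positions between two training points glide smoothly from one
    sound to the other; positions off the line yield new blends.
    """
    d = np.linalg.norm(positions - pos, axis=1)
    if d.min() < eps:                      # exactly on a training point
        return params[d.argmin()]
    w = 1.0 / d**2
    return (w[:, None] * params).sum(axis=0) / w.sum()

# Halfway between A and B: a smooth transition between the two sounds.
print(predict(np.array([0.5, 0.0, 0.0])))   # -> [330.   0.5]
```

The point is the workflow: demonstrate a few examples, then explore the space between and beyond them, keeping what sounds good.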

This is just one of innumerable ways the Wekinator can be used, from performances that combine music and movement to letting a snake make music. Fiebrink, a senior lecturer in Computing at Goldsmiths, University of London, says it is for making new musical instruments and experiences that couldn’t be played on a standard MIDI keyboard – allowing humans to ‘demonstrate’ good music to the computer through trial and error, rather than trying to code for it in advance. “Describing a good melody is really hard, whether in words or in maths,” says Fiebrink. “But it’s really easy to play a melody we like.”

Like the Dadabots project, the Wekinator tackles the central problem of modern musical composition: finding interesting places in multi-dimensional ‘timbre space’ – something the previous two generations of musicians might have done with samples, or effects pedals – and then charting paths between them to make music. In both cases, human input is critical.

Musicians are already experimenting with AI, but there’s no set model yet for how this works. The musicians Holly Herndon and Darren Cunningham (who records as Actress) have both chosen to describe their AI as a collaborator. Herndon calls her “AI baby” Spawn; Cunningham’s alter ego is Young Paint. But there are many ways it could work. Musicians could feed their output to an AI and choose from the innumerable variations it produces; or they could use it to brainstorm new ideas. Or they could automate tasks that some find mind-numbingly dull – using machine learning to mix their music, for example, rather than endlessly fiddling with presets and levels.

For the rest of us, the payoff of AI musical technology will come in the ability to enjoy meaningful creative experiences. The idea of using technology to improve performance is by no means new: Auto-Tune, a system that corrects out-of-tune notes, has made many musicians’ careers; recently, researchers used data from the karaoke app Smule to train an AI system to do the same thing more naturally. But the potential of AI is to let anyone compose, as well as perform, music. The start-up Vochlea takes the process one step further: sing or beatbox into its Dubler system, and what comes out is fully instrumented music.
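The foundation of such systems is pitch tracking: estimating the fundamental frequency of a voice and quantising it to notes that can drive virtual instruments. A hedged sketch using the open-source librosa library – the file name is a placeholder, and Vochlea’s actual pipeline is proprietary and certainly more elaborate:

```python
import librosa
import numpy as np

# Load a vocal recording (placeholder path) and estimate its pitch
# contour with the probabilistic YIN algorithm.
y, sr = librosa.load("vocal_take.wav")
f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C6"), sr=sr)

# Quantise the voiced frames to the nearest MIDI note - the raw
# material a synthesiser or sampler could then play back.
midi = np.round(librosa.hz_to_midi(f0[voiced])).astype(int)
print(librosa.midi_to_note(midi[:16]))    # the first few sung notes
```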

We know from experience that new musical forms emerge alongside new forms of technology – amplification created rock ’n’ roll, and PlayStation software spawned grime. What musical genres will come into being over the next two decades is anyone’s guess, but there will undoubtedly be a lot of new music – adding to the millions of tracks already posted to Spotify, YouTube, SoundCloud and the rest. That creates a new problem: how will we ever navigate it all?

The soundtrack of your life

Once music went digital, there was little reason for the single or album formats to persist. But sorting through billions of MP3s presented a new and unfamiliar challenge of its own – particularly when they were posted on peer-to-peer sharing networks like Napster or via BitTorrent, with little categorisation and lots of mislabelling. Search engines, music websites, blogs and community forums all played their part in helping people pick their way through the morass.

Until the algorithms took over, that is. In a matter of a few years, the basic units of music have for many listeners become the social-media post, the YouTube rip and the Spotify playlist. In each case, it’s an algorithm that decides what you hear – and critically, what you’ll hear next. The publishers and promoters of music, who once cosily scratched each other’s backs to choose hits and make stars, now spend their days slugging it out just to get their acts heard.
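One simple way an algorithm can decide what you’ll hear next is co-occurrence: tracks that often appear together in listeners’ playlists are assumed to belong together. A toy Python sketch with invented playlists – real platforms draw on far richer signals and models than this:

```python
from collections import Counter
from itertools import combinations

# Invented listening data: each playlist is a set of track IDs.
playlists = [
    {"track_a", "track_b", "track_c"},
    {"track_a", "track_b", "track_d"},
    {"track_b", "track_c", "track_e"},
]

# Count how often each pair of tracks shares a playlist.
co_occurrence = Counter()
for pl in playlists:
    for pair in combinations(sorted(pl), 2):
        co_occurrence[pair] += 1

def next_track(seed, k=3):
    """Recommend tracks most often playlisted alongside the seed."""
    scores = Counter()
    for (a, b), n in co_occurrence.items():
        if a == seed:
            scores[b] += n
        elif b == seed:
            scores[a] += n
    return [t for t, _ in scores.most_common(k)]

print(next_track("track_a"))   # -> ['track_b', 'track_c', 'track_d']
```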

“It’s never been easier to get music out there – anyone can do that,” says Richard O’Brien, founder of media-industry consultancy Encyclomedia. “But it’s much harder to stand out, because people listen to so much more: you can’t just put out a track and ensure people listen to it over and over again.”

Some of the tricks are relatively simple: the CEO of Universal Music recently advised acts to make sure their song’s catchiest part – generally its chorus – is also its title, to make it easy for people to request from voice assistants such as Alexa. Others are more complicated: O’Brien says the making of an “overnight sensation” these days often involves as much as two years of low-key testing and complex data analysis to work out what will catch listeners’ attention – or at least the attention of the recommendation algorithms.

Slave to the algorithm

Faced with all this, why bother with humans at all? Rumours persist about artists who seem to have no existence outside the music platforms. Today, the explanation is that they are journeyman musicians riding the algorithmic tide, but it’s easy to imagine algorithms taking their place over the next few years – and again, there are start-ups aiming to turn out everything from pop hits to ambient soundscapes. In fact, Warner Music hit the headlines last month when it ‘signed an algorithm’ to a 20-album deal – the German app Endel, which generates soundtracks that respond to a listener’s activities: working, commuting or sleeping.

So far, Endel does not seem a runaway success in recorded format: fewer than 2,000 people listened to its five sleep-themed albums on Spotify last month – although that is probably no worse than many of the millions of human artists languishing in obscurity on the platform. Yet the algorithmic platforms still have some way to go, suggests O’Brien.

“The algorithmic tools used for discovery are very clever ways of making a recommendation, based on what they know about you, but recommendations have only ever been one way of discovering music,” he says. Browsing is another: there still isn’t a really good digital equivalent to riffling through vinyl records. And part of music’s appeal is the way it’s woven into our cultural and social lives. Many of our most precious musical memories are the result of memorable occasions, happy accidents or just plain chance. That’s hard to replicate for an algorithm that only knows you through a screen.

Given 20 years, that might change. Endel already keeps tabs on your location, motion and heartbeat; smart assistants that know about your life, perhaps abetted by emerging technologies such as ‘mind-reading’ headsets, could make a better stab at predicting what kind of music you’re in the mood for. And an AI could serve it up, whether generating it from scratch, curating a playlist on the fly, or parametrically adjusting the mood of your existing favourites – shifting them downtempo for relaxed times, or sparking them up for the hustle and bustle.
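That last option – parametrically reshaping existing tracks – is already within reach of open-source tools. A minimal sketch using librosa’s pitch-preserving time stretch, with placeholder file names; a production system would handle audio quality and musical structure far more carefully:

```python
import librosa
import soundfile as sf

# Load a favourite track (placeholder path), keeping its sample rate.
y, sr = librosa.load("favourite_track.wav", sr=None)

# Shift it downtempo for relaxed times: 85% of the original speed,
# pitch preserved by librosa's phase-vocoder time stretch.
relaxed = librosa.effects.time_stretch(y, rate=0.85)
sf.write("favourite_track_relaxed.wav", relaxed, sr)

# Or spark it up for the hustle and bustle: 115% speed.
energetic = librosa.effects.time_stretch(y, rate=1.15)
sf.write("favourite_track_energetic.wav", energetic, sr)
```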

That promise of an always-on DJ, perfectly attuned to your moods and activities, might seem perfection to some. To others, it will seem stifling, privacy-violating and inhuman – depriving us of the chance to tell our own stories about how we find and enjoy music. That’s where O’Brien’s final plank of musical discovery comes in. “The physical format – whether it’s live music or packaging or merchandise – is a way of building fanbases, precisely because it’s not so immediate,” he says. And that by definition can’t be replicated online. Or can it?

Virtual tours

With earnings from streams amounting to a decent living for only the top few percent of artists, playing live has become a crucial part of the mix for working musicians, although most can only dream of audiences numbering in the thousands. Most must have looked on in awe, then, at the number of people who recently turned up to a set by DJ Marshmello: at one point, there were more than ten million of them. Not bad for a man who wears a bucket on his head to hide his identity – even in the virtual world of Fortnite, the spectacularly popular game where his ‘concert’ was held.

Industry pundits have been arguing ever since about whether this represents a triumph of marketing or the future of performance. The sceptics say what’s really being showcased is Fortnite’s promotional muscle, an amplification of the meme-driven hype that already takes place on social media in advance of a major artist dropping a new cut. The advocates say it’s genuine participation in a highly immersive experience and it makes sense to go where the audience is.

Another possibility is to make the stars virtual. As with automation of other kinds of work, virtual pop stars have their advantages – they can tour and perform constantly, or indeed in multiple locations at the same time. And for most of their fans, they’re no more unreachable than the members of BTS or One Direction.

What does the future hold for those who prefer performance in the flesh? Today’s audio-visual spectaculars are likely to embrace elements of the virtual, or to use augmented reality to add layers to the gig-going experience. And Rebecca Fiebrink’s work points the way to performances that more closely integrate music, movement and visuals – whether those are bombastic affairs or more personal experiences. But does this all mean the machines will have taken over?

No. At one extreme, it’s quite possible to imagine entirely synthetic experiences: shows held in virtual reality by entirely digital stars, performing music generated by AI composers. But at the other, the most grass-roots forms of musical performance have survived every technological revolution: we still have drum circles. And in the middle ground, there is space for humans and algorithms to collaborate on new and exciting performances.

Coral Manton recently organised an algorave at the British Library in London – a rave where the DJs generate music algorithmically, often hacking code in real time. The result is raw, glitchy, sometimes unlistenable and undanceable – and popular. There were queues around the block for the event, Manton says, in a year that’s seen algoraves proliferate around the world.

Decoding the DJ

What’s the appeal? “It’s the punk end of electronic music,” says Manton, who, like many involved in the scene, isn’t a musician first and foremost: she works as a digital artist at Plymouth University. “Why does anyone pick up a guitar? I picked up a laptop and started playing with it: like anything, it’s easy to start, but hard to get good.”

Manton draws a contrast with the polished performances of electronica performers and DJs, where everything runs seamlessly at the press of a button. At an algorave, by contrast, software crashes, networks go down and the algorithms can run amok. “The best performances are those where you don’t know what’s going to happen and you get a sense of figuring it out,” says Manton. Because the code is usually projected alongside the visuals, the audience can see that happening in real time.
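To give a flavour of what that projected code might compute, here is a hedged Python sketch of the Euclidean rhythm algorithm popular in live-coded music – though live coders more often work in dedicated environments such as TidalCycles or Sonic Pi:

```python
def euclidean(hits, steps):
    """Spread `hits` onsets as evenly as possible over `steps` slots --
    the Euclidean rhythms common in live-coded dance music."""
    pattern = []
    bucket = 0
    for _ in range(steps):
        bucket += hits
        if bucket >= steps:
            bucket -= steps
            pattern.append("x")   # play a hit
        else:
            pattern.append(".")   # rest
    return "".join(pattern)

# E(3, 8) yields the tresillo pattern underlying much dance music.
print(euclidean(3, 8))   # -> ..x..x.x  (a rotation of x..x..x.)
```

Changing the arguments mid-performance – the basic live-coding move – is what lets the audience hear, and see, the rhythm mutate in real time.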

Just as punk rebelled against the conformity of the 1970s, so algoraves pick up on discontent with the digital lifestyle. There’s a strongly open-source ethos – a protest, perhaps, against the perfectionist pressures of social media and relentless commercialisation of online spaces. No doubt algorave will in due course be commercialised and commodified, as youthful protest music always has been. But for the moment, it acts as a reminder that as long as there are human listeners, technology will continue to democratise, not dominate, music: to provoke emotions and sensations just as it has since antiquity.

If you would like to comment on this story or anything else you have seen on BBC Culture, head over to our Facebook page or message us on Twitter.

And if you liked this story, sign up for the weekly bbc.com features newsletter, called The Essential List. A handpicked selection of stories from BBC Future, Culture, Capital and Travel, delivered to your inbox every Friday.
