Jumping off the Shoulders of Giants: Speaking at TEDx Cambridge University

Dr Aaron Ralby at TEDx Cambridge University

Last week I had the pleasure of speaking at the Cambridge University TEDx. This was my first time giving a TEDx talk, and I was very excited about the opportunity to share my work and passions on such an amazing platform. It will still be several months before the talks are edited and put online, but I thought I would share some of my experiences, both from the day itself and from the preparation for the conference.

I was contacted on 1 October last year with an invitation to speak at this year's TEDx conference on 11 February 2017. Over the coming months, I was given details on the length and format of the talk, what could and could not be included, and the structure of the conference itself. There were a total of 16 speakers divided equally into two sessions: eight speakers in a morning session, and eight speakers in an afternoon session. There was a separate audience for each session, so everyone from the morning left at lunchtime and a new audience came in for the afternoon talks. I presented in the afternoon.

There were a number of technical issues on the day. This being Cambridge, many of the buildings are old and made of either stone or brick. Difficulties with electronics and signals are not uncommon. During the morning session, a couple of talks had to be stopped and restarted because of issues with the audio recording. Fortunately, this issue does not seem to have affected my own talk; instead, I faced trouble with a nonfunctional clicker.

The day before the conference there was a dress rehearsal, at which some of these issues with the sound and the slide clicker were identified. Since it was only the evening before, there was not enough time for the production team to solve them all. Since I knew the clicker might not work – and since my talk relied heavily on carefully timed transitions of slides – I prepared as best I could. I went home after the dress rehearsal and typed out a full version of my talk, then printed it out and highlighted every word on which I wanted a slide advanced. The morning of the conference, I gave this copy to the booth manager so she could follow along if necessary and advance the slides on the right words.

When I stepped out on stage, it was unclear whether the clicker would work or not. It quickly became apparent that it was not working properly. The first video started, then paused after only a couple of seconds. I started again, but the slides advanced to the third slide, skipping over the one in between. Given the amount of work that went into preparing the slides and the talk as a whole, as well as the fact that this was in front of a live audience and being recorded for future distribution, it would have been easy to get frustrated, nervous, and flustered. As frustrating as it genuinely was, I had prepared for this eventuality, and I knew the best option would be to bear it with humor.

After a bit of banter with the production team, we decided that I would not use the clicker at all, but signal to the booth when I needed a slide advanced. This ended up working well throughout the talk, but I did feel strange signaling quite distinctly every time I wanted a slide advanced. Even though the booth manager had the copy of my talk with transitions, I didn’t want to take any chances. So as to make the signals clear, I would raise my hand and flick my wrist toward the booth – as a result, it probably looked as though I were using a fishing rod throughout the talk!

Technical difficulties aside, I was really pleased with how the talk went and how it was received. It relied heavily on the slides themselves, so it was a tremendous relief to be able to get through them correctly and with proper timing. Each slide was actually a video that connected to the video in the next slide. In this way, my entire talk was essentially a walk-through of a three-dimensional virtual space. It was a memory palace for the talk itself.

The theme of the conference was "jumping off the shoulders of giants." The conference explored whether we can actually innovate entirely new creations and disciplines, or whether we are continually working off the shoulders of giants. For my own part, I consider the importance of working within – as well as combining – various traditions and lineages.

Beginning the talk with some of the philological giants who inspired me, I then summarized the entire knowledge requirement for developing fluency in a language, using a virtual room to show all of the materials necessary to learn. In this way, the audience was presented with a to-scale representation of everything one would need to learn in order to develop fluency. With such a daunting amount of material to internalize, the next question was of course: how do you learn it?

I then introduced the giants from the Middle Ages who inspired me for their use of memory systems. We looked at how spatial memory systems can be used for learning large and complex subjects, such as languages. Using an example of Spanish verbs, I was able to take the audience into our VR software for memory palaces, Macunx VR.

I would have to say that two personal highlights from the talk were unfurling one of our maps of Arabic for the audience to see the actual scale and size of a language's grammar, and the moment when mnemonics were selected and placed seamlessly into space within Macunx VR. I could hear gasps in the audience at this point.

Aaron showing the Linguisticator Arabic map at the TEDx Cambridge University conference

One of the interesting experiences I had giving this talk was that my slides were themselves a memory palace for the talk. All of my talking points were encapsulated into images placed within the 3D space being explored within the slides. This made it virtually impossible to lose my place entirely. In fact, it almost felt as though I were cheating by bringing my entire talk with me onto the stage! At the dress rehearsal the day before, I noticed many of the other speakers reviewing notes for their talks, and it occurred to me that it had been several weeks since I had even looked at a text version of my own.

In the lead-up to the conference itself, I practiced delivering my speech many times, and I'm grateful to my mother, who helped me practice over Skype. After I'd given her the talk twice with the slides over screen share, she was able to repeat back to me my own talk. It wasn't, of course, word-for-word – but she was able to recall every talking point – and the sense conveyed at each talking point – simply by recalling the virtual space that comprised the slides of the talk. That was an exciting moment for me.

In preparing for the talk, I also revisited one of my favorite works on memory: The Book of Memory by Mary Carruthers. In it, there is some discussion and debate between ancient and medieval scholars on the usefulness of creating and storing mnemonics for individual words within a text. In other words, if you are working to remember a text, do you need to create a mnemonic for each and every word, or does it suffice to create a mnemonic for each phrase or sentence? While it is possible to remember texts accurately word-for-word by using mnemonics for each and every word, this was regarded as excessive by some scholars, and I tend to agree. Unless the accuracy is absolutely essential, it is better to use a single mnemonic for an entire phrase, sentence, or even paragraph.

When I created the 3D virtual spaces to use in the TEDx talk, I only added images that would hold my main talking points. As I prepared for delivering the speech, however, I noticed myself stumbling in a few places, struggling to remember all of my points or their sequence. In such places, I simply added imaginary mnemonics into the virtual memory palace. So, while the audience saw a simple and minimalist view of my memory palace for the talk itself, I had in my own mind's eye additional images that made it easy to remember each and every sentence within the entire talk. In this way, I could give the talk both forwards and backwards, and start easily at any point from within the presentation. This certainly added to my comfort on stage, and particularly made me feel more at ease considering the technical setbacks on the day. Should something have happened to derail me in the middle of the talk, it would have been easy to pick up exactly where I had left off.

In the run-up to the conference, and even on the day itself, I was, of course, focused mostly on making sure I delivered the best presentation I could. It was, however, a wonderful experience meeting all of the other amazing speakers who came to the conference. As a fellow speaker, I could attend both the morning and the afternoon sessions, even though I was myself only speaking in the afternoon. There were a few presentations that stood out as particularly excellent.

In the morning session there was Toby McCartney from MacRebur (pictured left), a company recycling waste plastics to create a new type of road paving system. The company has recently won Richard Branson's Virgin startup award, and is currently well on its way to raising several hundred thousand pounds of funding to take things to the next level. I had the pleasure of meeting Toby in the evening after the conference had finished. He was generous of spirit and took to heart the opportunity that speaking on such an amazing platform held. This had been evident in the preparation and delivery of his own presentation.

There was also Julie Krohner (pictured right), who spoke on the use of virtual reality in triggering and developing empathy. Using video examples and case studies from her own experience, Julie showed how this relatively new technology could be used to help humanize interactions between people both over the Internet and face-to-face. Based on the comments of audience members afterwards, it was clear Julie's talk and examples had made a powerful and positive impact.

Finally there was Douwe Kiela (pictured left), a postdoctoral researcher at Facebook AI. Douwe presented a lot of the research from his own PhD, looking at how language can be clustered according to meanings and associations based on different criteria. As human beings, we find it easy to draw associations between words based both on what they mean and on how they sound. For example, we can connect a lake and a waterfall because they are both made of water; but we can likewise associate a waterfall with a lion because of the roaring sound they both make. We could also associate a lake with a cake because the two words themselves rhyme. In developing artificial intelligence, these kinds of multisensory associations and clusters need to be modeled in a meaningful way.
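To make the idea of multiple association criteria concrete, here is a minimal sketch of my own (an illustration, not Douwe Kiela's actual research code): each word carries hand-labeled meaning and sound features, and two words can be linked through shared meaning, shared real-world sound, or rhyme between the word forms themselves. The feature labels are invented for the lake/waterfall/lion/cake examples above.

```python
# Toy illustration: linking words along different criteria.
# Feature sets are hypothetical labels chosen for this example only.
WORDS = {
    "lake":      {"meaning": {"water", "still"},     "sound": set()},
    "waterfall": {"meaning": {"water", "falling"},   "sound": {"roar"}},
    "lion":      {"meaning": {"animal", "predator"}, "sound": {"roar"}},
    "cake":      {"meaning": {"food", "sweet"},      "sound": set()},
}

def shares_meaning(a, b):
    # Semantic link: the two concepts share a meaning feature.
    return bool(WORDS[a]["meaning"] & WORDS[b]["meaning"])

def shares_sound(a, b):
    # Perceptual link: the two things make a similar real-world sound.
    return bool(WORDS[a]["sound"] & WORDS[b]["sound"])

def rhymes(a, b):
    # Word-form link: a crude rhyme check on the words themselves.
    return a[-3:] == b[-3:]

def associations(a, b):
    """Return every criterion under which the two words are linked."""
    links = []
    if shares_meaning(a, b):
        links.append("meaning")
    if shares_sound(a, b):
        links.append("sound")
    if rhymes(a, b):
        links.append("rhyme")
    return links

# lake–waterfall link via meaning, waterfall–lion via sound,
# lake–cake via rhyme between the word forms.
```

Real multimodal models learn such features from data rather than from hand labels, but the sketch shows why a single similarity measure is not enough: each pair above is linked under a different criterion.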

There were, of course, several other amazing speakers and talks, but these three stood out to me as particularly interesting and particularly well presented.

Again, it was an amazing experience and opportunity to get to share my work at the Cambridge University TEDx. I'm very grateful for the opportunity, and look forward to seeing the polished videos from all the speakers once they are edited and online.
