Into the second week of teaching and the configuration and setup has changed –
The laptop has gone from an eight-year-old machine to a new one, and the monitor has been upgraded to a 28-inch 4K monitor. Being able to see 50 students on the screen at a single point in time is hugely valuable during discussions.
The webcam on the new laptop is clearly better. I want to use the monitor as the primary screen while teaching so I’ve been looking at options and configurations. There is a lovely piece by Guy Kawasaki on how to get your setup just right. As I’m not looking to spend another €2K right now I haven’t gone down this route. I’ve gone for the reasonably standard Logitech C920s.
I considered a number of other options. I played with a paid version of EpocCam, both on WiFi and with a wired connection, and really wasn’t happy with the quality of the results. The results that come through on the laptop and Zoom are nothing like what you see on the primary camera on the phone. I looked at using my Fujifilm camera but it isn’t in the list of supported models, and it could take quite a while to test workarounds and see if I could get it to work. Time better spent elsewhere. Between installing and optimising a mesh network in the house and cleaning up multiple devices I’ve had enough hardware fun for 2021.
A GoPro was another option, which a friend is using with some success, and I’d like to compare the Logitech with the GoPro at some point in the future. For now the Logitech will do (once it arrives). I’ll look at some mic options next but the broad setup is nearly there.
Right now I’m looking at improving the teaching and learning process. Once I carve out some time I’m going to do some short pre-recorded videos. I’ve written up one class example on the strategy cycle.
The one thing I’m trying to get slicker at is switching and screen recording. I’m looking for a very simple way to slice up my classes into smaller bites, and the transcription of lectures in Zoom is fairly impressive. Right now Zoom does some odd things as I turn screen sharing off and on. It flips the screens for display/presenter view, which is a minor annoyance, but I’d like a way to press a single key to switch between the screens I’m displaying.
Logging in as a second user on the iPad has turned out to have two advantages – being able to use the whiteboard and also seeing the student view of my class, though I have to turn the sound down to prevent echo.
Ongoing challenges I’m not quite sure how to solve, or if they are solvable – talking while keeping an eye on 50+ students and an eye on chat. In a physical classroom you get a sense of a class: bored, restless, not understanding. There’s a lot more work to do to get that sense when teaching online. There are some broader pedagogical questions to explore here as well that I’m thinking about, and I’m mapping out a space that goes from education to entertainment and crosses universities, teaching and training, and events and conferences as well. But that’s for a longer post.
What’s also been valuable is a number of conversations with students studying other courses and lecturers teaching across other institutions, and it’s something I’m really interested to do more of: talk to more people to figure out good practices that will improve everyone’s experience.
I’m back teaching in UCD this semester, lecturing to 54 MBA students in Digital Transformation and a similar number of MSc students on Strategy and Innovation. Both the MBA and MSc programmes were originally designed as in-class experiences, and my own approach is highly interactive with the students in class and has been refined over more than a decade of teaching.
There’s more than a slight difference this year: due to the pandemic, the teaching will be online. What I’m going to try to do over the next few months is to document my own learnings around the process of teaching and engaging with over 100 students through this medium, and to refine those learnings. The image below is the first version of the technical setup for doing it. My trusty MacBook is clocking in at 8 years old at this point and about to be retired in a week or so as my primary device.
Core setup for class is:
* MacBook as primary device (due an upgrade shortly)
* External second monitor (my 4K monitor was due to arrive before class but is wandering around a DPD depot in Athlone at this point)
* Blue Yeti Nano mic, which when tested is better than the internal mic on the laptop
* iPad and Apple Pencil. I’m logged in twice to my Zoom accounts with the iPad account as co-host. This does two things: it lets me see the same view as the students see on their primary screen (more on that later) and it also lets me use the Whiteboard in Zoom as well as other tools to draw using the Pencil. I’ve an external keyboard and mouse as well
* Ring light so my face is not in constant shadow, given the lighting in the room I’m in
The current software I’m using is:
Zoom for live classes
Powerpoint for presentations
iPad and Pencil using both Zoom whiteboard and Goodnotes
Typeform for running quizzes
Content (readings, videos, discussion boards etc) on Brightspace
Lessons from class 1:
Be careful about muffling or turning off the mic by accident. The easiest fix here will be to switch to a lapel mic.
Use of the chat function in Zoom works but it needs to be positioned carefully to catch the eye. I’ve used this for dropping in additional questions. I also used it to share a quiz which I’d built in Typeform for the first class. I need to get a little more accustomed to the polling function in Zoom and to figure out if I can do freeform text answers. Multiple-choice quizzes are fine, but with 50 people in class, one or two sentence answers to freeform text questions can generate a rich set of examples for the whole class in a short space of time.
Chat is also a good way of checking when people are finished. I used 5-minute countdown timers for an activity and for a mid-class break. People typing ‘done’ into chat indicated they were finished in the last 30 seconds of the activity.
Breakout rooms in Zoom – I’d one planned for this class just as a test but skipped it because of timings in my lesson plan. This was really a test activity and we’ll do a number of these over the next few weeks anyway. I’ve been running a social group using breakout rooms since April so this is one tool I’m well used to.
Changes I want to make at this point.
UCD has some very good teaching and learning materials to support staff. I spent a few days last week going through all the materials while reviewing how I’d run the classes.
In a physical classroom I work on the basis of switching things around every 15–20 minutes and chunk a 2-hour class into about six chunks, with breaks. Online I’m looking at 6–10 minute information chunks, switching activities and interactions and punctuating the material approximately every 10 minutes. Talking to a number of people who are running fully online courses, video material frequently comes in 2–10 minute chunks. There’s still a little bit of work to do on that.
The physical arrangement of screens etc. is something I’m optimising. While talking I’m also trying to keep half an eye on chat and on 50 faces on screen to see if any hands go up for questions. The 4K monitor will allow me to have 50 faces on Zoom at the one time, which will help. I suspect upgrading to an external webcam and a new mic will help the physical positioning of the screen.
Later today I’ll download the recording of the class, slice it into chunks and reupload it for students.
UCD uses Brightspace as its online learning platform. Brightspace has what feels right now like superficial elegance. It looks well but I’m not sure it optimises for speed. I’m using the discussion board feature in Brightspace across the course and jumping between threads is clunkier than it should be, even if it looks pretty.
Speed, workflows and processes are some of the overall things I’m trying to figure out at the moment. I use in-class diary submissions by students. This usually involves the students handing up physical sheets of paper which I correct and hand back. The advantage of this is that all the pages are together, all the work is in the correcting, and they’re handed back at the start of the next class. In the online environment there’s a layer of additional work in opening and responding to each student individually. It’s something I’ve seen with teachers in schools as well. It’s the bit where in-class pedagogy doesn’t translate from analogue to digital efficiently.
I’m looking at pre-recording some of my material as well. There’s a bit more work to be done on that during the week.
Lots of interesting things to figure out over the next few weeks, and I’m really interested in talking to other lecturers about what they’ve learned and how they’re working in our new hybrid environment.
Science fiction writer William Gibson said “The future is already here – it’s just not evenly distributed.” When you look around you can see the truth of that statement. Most of the technologies that will influence us over the next few decades already exist. In many ways it feels like we’re living in parts of that future. We can 3D print replacement jaws for people, and 3D printing was invented over 30 years ago. In NDRC, where I work, we have companies working on embedded sensors for post-operative bleed detection, and on helping kids with focus and ADHD problems through neurofeedback gameplay. In many ways technology is enriching our lives. In reality the title of this piece is less ‘Our Algorithmic Future’ than ‘Our Algorithmic Present’.
As a technophile that’s very exciting. I have a deep and abiding love of science and the wonderful possibility of technology. I grew up reading Isaac Asimov (his science and his fiction), Arthur C Clarke and Carl Sagan. And watching Star Trek, Tomorrow’s World and other optimistic visions of technology and the future.
At the same time there is a darker side to technology. Paul Ehrlich said “To err is human, to really foul things up requires a computer.” It’s not hard to find examples. California released 450 high-risk, violent prisoners on an unsuspecting public in 2011, due to a mistake in its computer programming. ‘We-Connect’, an app-based vibrator, captures the date and time of each use and the selected vibration settings, and transmits the data – along with the user’s personal email address – to its servers in Canada, unbeknownst to its customers, a number of whom are now suing the company.
And darkest of all is the case of the firing of elementary school teacher Sarah Wysocki by Washington DC Public Schools. The school system used ‘VAR’, a value-added statistical tool, to measure a teacher’s direct contribution to student test results. Despite being highly regarded in classroom observations, the low score from the algorithm led to her being fired. There was no recourse or appeal, and no way to really understand the workings of VAR, as it is copyrighted and cannot be viewed.
There is this abstract notion of what the computer said or what the data tells us. Much as the complex gibberish that underlay the risk models of economists and financial services companies in the run-up to the crash wasn’t questioned (because maths), the issue here isn’t the algorithms so much as people and their magical thinking.
I came across this quote from IPPN Director Sean Cottrell, in his address to 1,000 primary school principals at the Citywest Hotel in 2011. He commented:
‘Every calf, cow and bull in the State is registered by the Department of Agriculture & Food in the interests of food traceability. Why isn’t the same tracking technology in place to capture the health, education and care needs of every child?’
Well intentioned as it might be, this shows a poor understanding of cows, a worse understanding of technology, and a dreadful misunderstanding of children and their needs. I find this thinking deeply disturbing and profoundly creepy, so I decided to unpack it a little.
This is how we track cows
And this is how we start that process by tracking calves
And I wondered is this how he’d like to track children? (H/T to @Rowan_Manahan for that last image)
Then I realised that we are already tracking children. Only it’s not the Primary Principals’ Network that’s doing it; it is private companies doing the tracking and tagging. It is Google and Facebook and Snapchat, with some interesting results and some profound ethical questions. We now know that Instagram photos can reveal predictive markers of depression and that Facebook can influence mood, and people’s purchasing habits.
Our algorithmic present is composed of both data and algorithms. We have had exponential growth in processing capability over the last number of years, which has enabled some really amazing developments in technology. Neural networks first emerged in the 1950s, dimmed in the late 1960s, reemerged in the 1980s and have taken off like wildfire in the last few years. The neural network explosion is down to the power, cheapness and availability of GPUs, together with improvements in the algorithms themselves. And neural networks are really, really good at some kinds of pattern analysis. We are getting to a point where they are helping radiologists spot overlooked small breast cancers.
There is also a very big problem with algorithms: the problem of the black box. The proprietary nature of many algorithms and data sets means that only certain people can look at them. Worse, we are building systems in a way where we don’t necessarily understand their internal workings and rules very well at all.
Black boxes look like this. In many systems we see some of the input and the output, but most of what happens inside is not only hidden, it’s not understood. In a classic machine learning model we feed in data and apply certain initial algorithms, then use the result for prediction or classification. But we need to be careful of the consequences. As Cathy O’Neil cleverly put it, Donald Trump is an object lesson in bad machine learning: iterate on how the crowd reacts to what he says and over-optimise for that output – a classic problem of machine learning trained on a bad data set. We need to think about what the systems we’re building are optimising for.
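That over-optimisation loop can be sketched in a few lines of Python. This is a toy model, not O’Neil’s analysis formalised: the quadratic ‘crowd reaction’ and the escalation rule are invented purely to show how optimising against a biased feedback signal ratchets the output toward extremes.

```python
# Toy sketch of optimising against a biased feedback signal.
# The reward function and escalation rule are invented for illustration.

def crowd_reaction(extremity):
    """Assumed crowd: reacts more strongly to more extreme messages."""
    return extremity ** 2

def optimise_for_applause(rounds=5):
    candidates = [1, 2, 3]   # extremity levels of the messages on offer
    history = []
    for _ in range(rounds):
        best = max(candidates, key=crowd_reaction)  # pick the loudest cheer
        history.append(best)
        candidates = [best, best + 1]  # next round: escalate from the winner
    return history

print(optimise_for_applause())  # extremity ratchets up: [3, 4, 5, 6, 7]
```

Nothing in the loop checks whether the output is true or useful; it optimises the only signal it is given, which is exactly the ‘trained on a bad data set’ problem.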
George Box said that “All models are wrong, but some are useful.” Korzybski put it more simply: “The map is not the territory.” It’s important to remember that an algorithm is a model, and much as the human mind creates fallible, biased models, we can also construct fallible computer models. Cathy O’Neil put it bluntly: “A model is no more than a formal opinion embedded in code.” The challenge is that the models are more often than not created by young white males from an upper-middle-class or upper-class background. It is not that human brains are perfect model makers, but we have spent a long time attempting to build social processes to cope with these biases. The scientific method itself is one of the most powerful tools we’ve invented to overcome them.
As we unleash these models on education (Sarah’s case), policing (pre-crime in Chicago), health and hiring, we need to be aware of the challenges they pose. Suman Deb Roy has pointed out:
Algorithmic systems are not a settled science, and fitting it blindly to human bias can leave inequality unchallenged and unexposed. Machines cannot avoid using data. But we cannot allow them to discriminate against consumers and citizens. We have to find a path where software biases and unfair impact is comprehended not just in hindsight. This is a new kind of bug. And this time, punting it as ‘an undocumented feature’ could ruin everything. 
Bernard Marr illustrates this with an example:
Hiring algorithms. More and more companies are turning to computerized learning systems to filter and hire job applicants, especially for lower wage, service sector jobs. These algorithms may be putting jobs out of reach for some applicants, even though they are qualified and want to work. For example, some of these algorithms have found that, statistically, people with shorter commutes are more likely to stay in a job longer, so the application asks, “How long is your commute?” Applicants who have longer commutes, less reliable transportation (using public transportation instead of their own car, for example) or who haven’t been at their address for very long will be scored lower for the job. Statistically, these considerations may all be accurate, but are they fair? 
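Marr’s commute example can be made concrete with a toy scoring function. Everything here is invented for illustration (the weights, the threshold and the feature set come from no real hiring system): the point is only that a statistically predictive proxy like commute length can screen out a strongly qualified applicant.

```python
# Invented, illustrative screening score - not any real hiring system.
def screening_score(qualification, commute_km, years_at_address):
    score = 2.0 * qualification              # what the job actually needs
    score -= 0.5 * commute_km                # proxy: long commute -> churn risk
    score += 1.0 * min(years_at_address, 5)  # proxy: address stability
    return score

THRESHOLD = 10.0  # assumed interview cut-off

# A strong applicant with a long commute vs a weaker applicant nearby.
strong_far = screening_score(qualification=9, commute_km=25, years_at_address=1)
weak_near = screening_score(qualification=5, commute_km=2, years_at_address=5)

print(strong_far, strong_far >= THRESHOLD)  # 6.5 False: screened out
print(weak_near, weak_near >= THRESHOLD)    # 14.0 True: advances
```

The proxies may be statistically accurate in aggregate, yet the outcome for the individual (a qualified applicant never reaching an interview) is exactly the fairness question Marr raises.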
There is an old saying in tech – “GIGO: Garbage In, Garbage Out”. The risk now is that this will become BIBO: “Bias In, Bias Out”.
As we gather vast amounts of data, the potential for problems increases. There can be unusual downstream consequences, and also opportunities to create perverse incentives. We are embedding sensors in cars and looking at the idea that safer drivers will be given better rates. The challenge is that personalised insurance breaks the concept of shared risk pools and can drive dysfunctional behaviour. Goodhart said, “When a measure becomes a target, it ceases to be a good measure.” We had a significant recent Irish example with crime statistics, where the CSO pointed out problems with both the under-recording of crime by police and the downgrading of a number of reported crimes.
At one level I see our future as a choice between Iron Man – technology to augment – or Iron Maiden – technology controlled by a few that inflicts damage on the many. Technology to augment or to constrict. Technology changes that threaten the self also offer ways to strengthen the self, if used wisely and well.
It is clear that technology does not self-police. We could use technology to cut off the use of phones in cars – so they can’t be used while driving – but the companies that could do so currently choose not to.
In Europe we have our own bill of rights – a charter of fundamental rights enshrined in the Lisbon Treaty – which guarantees that “Everyone has the right of access to data which has been collected concerning him or her, and the right to have it rectified.” This right has been used to challenge the export of data from the EU to the US in the Schrems decision of the European Court of Justice.
My belief is that we need to extend these rights in the algorithmic era. We need to create a “Charter of Algorithmic Rights” for our algorithmic age – not a Magna Carta, which really just empowered the lords against the king without much for the peasants. We need algorithmic rights of the people, by the people and for the people.
Simply put, we need airbags for the algorithmic age. For decades cars have been safer for men than women, because the standard crash test dummy is based on a standard male size and biases the development of safety towards the average male. As I said, technology is not self-policing.
We are going to have to create better tools. We need to be able to detect and correct bias, and to audit for and ensure fairness over a simple move to efficiency. Otherwise we are tying things together in unforeseeable ways that can have profound consequences at the individual and societal level. Tools such as Values in Design and thought experiments help, but we need to go much further.
Kate Crawford, writing in Nature, says:
“A social-systems analysis could similarly ask whether and when people affected by AI systems get to ask questions about how such systems work. Financial advisers have been historically limited in the ways they can deploy machine learning because clients expect them to unpack and explain all decisions. Yet so far, individuals who are already subjected to determinations resulting from AI have no analogous power.” 
While this is necessary, I don’t believe it’s sufficient. We need a “Charter of Algorithmic Rights”. While looking to the opportunities they can afford, we need to recognise the biases and limitations of technology. What appears to be augmentation may not really be the case; it may restrict and rule rather than enable.
We need to ensure that our tools are creative and reflect the diversity of human experience.
We are better off managing them than being managed by them in our algorithmic future.
The story of computer errors allowing violent California prisoners to be released unsupervised can be found here, and the story on the app-based vibrator is here.
One link to the Sarah Wysocki story is here; for more details read Cathy O’Neil’s excellent book “Weapons of Math Destruction” or take a look at Cathy’s blog.
The original link was tweeted by Simon McGarr. The piece is here: http://www.ippn.ie/index.php/advocacy/press-releases/5000-easier-to-trace-cattle-than-children
How an Algorithm Learned to Identify Depressed Individuals by Studying Their Instagram Photos: https://www.technologyreview.com/s/602208/how-an-algorithm-learned-to-identify-depressed-individuals-by-studying-their-instagram/ and https://arxiv.org/pdf/1608.03282.pdf. Everything we know about Facebook’s mood manipulation: http://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/
Computer technology helps radiologists spot overlooked small breast cancers: http://www.cancernetwork.com/articles/computer-technology-helps-radiologists-spot-overlooked-small-breast-cancers. Neural nets may be so good because they map onto some fundamental principles of physics: http://arxiv.org/abs/1608.08225
 Trump as a bad Machine Learning Algorithm https://mathbabe.org/2016/08/11/donald-trump-is-like-a-biased-machine-learning-algorithm/
Genesis of the Data-Driven Bug https://www.eiuperspectives.economist.com/technology-innovation/genesis-data-driven-bug
 Bernard Marr The 5 Scariest Ways Big Data is Used Today http://data-informed.com/the-5-scariest-ways-big-data-is-used-today/
 What is the new Central Statistics Office report on Garda data and why does it matter?
and CSO (2016) http://www.cso.ie/en/media/csoie/releasespublications/documents/crimejustice/2016/reviewofcrime.pdf
DRI welcomes landmark data privacy judgement https://www.digitalrights.ie/dri-welcomes-landmark-data-privacy-judgement/ and Schrems v. Data Protection Commissioner https://epic.org/privacy/intl/schrems/
 Why Carmakers Always Insisted on Male Crash-Test Dummies
There is a blind spot in AI research, Kate Crawford & Ryan Calo
I started a new role and helped fund some great companies.
Worked with some great founders and some really amazing colleagues.
Spent time with people. Some of whom I met through Twitter and met physically for the first time in 2016.
Spent time in Hay on Wye in the glorious sunshine with some good friends.
Helped a smart man run a Coder Dojo program.
Spoke at some conferences Úll and Predict and went back to Congregation.
Read some books but not enough.
Ran too little.
Talked to a robot.
Buried a Guinea Pig.
Waded a bog.
Drank tea with many good people. And realised I need to drink tea with many more.
Warned people about a vampire.
Fought battles. Won most.
Ate breakfast overlooking the sea in Baltimore.
Walked where the Tuatha De Danann fought.
Played board games with some wonderful people.
Broke bread with others.
Watched the sun come up and go down in Dublin and Wicklow and Kerry and Cork and Mayo and Galway and Tipperary.
I said goodbye to some people. None of whom was famous but some of whom had a very big impact on me personally.
Walked with my children as they grew and flourished, saw them learn and sail, and shout and smile and cry and grow.
Loved my wife and realised how lucky I am.
Here’s to 2017
To more running, more friends and more family memories,
To hopefully fewer guinea pig funerals, and fewer other funerals,
And to breaking bread, and a chat and a cup of tea with you.
I’ve just taken up a role as Venture Leader in NDRC. My 11yo describes it as “being a talent scout for technology” and a friend called it “being a matchmaker”. I think the combined description is a good start.
As a note to self I’ll remind myself of something I borrowed from Rowan Manahan and first quoted six years ago. Still true. Still part of the plan.
The contrast caught some people by surprise, so I thought I’d expand on my thoughts. Sometimes more than 140 characters are needed. I love Twitter. It is possibly my favourite piece of technology, and I say that as someone who has lived, eaten and breathed technology for more than 20 years. It can be a glorious human sensor network reflecting the pulse of the planet. A learning library, a pub conversation, a place of humour or enlightenment. And a wonderful feeding point for information junkies.
I was having a quick flick through my timeline on Friday night when I saw news of what was happening in Paris starting to emerge. And then I tweeted that thought. And then I turned twitter off.
I turned it off because I have no one close in Paris. There is no information I need, nothing I can do to add to or help anyone in Paris, and nothing I can add to the story. There is a small irony here, as the first defining moment for me on Twitter was the Mumbai bombings in 2008. Up until then it was an interesting chat room that competed with Jaiku for attention. Mumbai underlined the power of Twitter for me. I find much of my news through Twitter, and a path into much emerging technology and many interesting conversations.
In 2010 I joined Mark Little in Storyful. I knew a lot about technology and very little about news. I worked with some of the smartest journalists on the planet (Mark Little, Gavin Sheridan, David Clinch, Markham Nolan, Malachy Browne, Aine Kerr and many others) and worked in a company that helped define how journalism and technology and social media can and should interact.
I remember being in the Storyful offices the day of the Utoya killings, hearing the first report of a bomb, then of shootings. I watched the Arab Spring unfold, and the Syrian War start. Watched shootings in the US, and occasional cat videos as well. There is a visceral, almost primal energy in a newsroom during a major breaking story. Journalists play an important role, underscored by Ruth McAvinia’s tweet:
@dermotcasey I hear you. Although news organisations need circumspect careful people most of all during nights like this.
And yet I do not miss those moments, because of the human toll: the people who have died or been injured. I do not miss having to put in place psychological training for journalists, and processes and policies for staff who see something disturbing in a piece of video they are watching. And there is the noise. I do not miss the noise of agendas and ideologies attempting to twist every event to their own ends. Paul Bernal summed up many of my thoughts perfectly, writing:
The aftermath of the events in Paris has shown many of the worst things about the current media and social media. I’ve been watching, reading and following with a feeling, primarily, of sadness. What depresses me the most – and surprises me the least – is the way that the hideousness has been used to support pretty much every agenda.
All I can do is sigh. And feel more sadness. I see the points that everyone has. And yet all I feel is sadness. There isn’t an easy solution to any of this. There aren’t easy answers. There really aren’t, no matter how tempting some of the ideas might be. I wish there were.
We have a situation where even Donald Trump is misquoted (or quoted out of time). Most of this adds noise, not signal. We need more reflective thought along the lines of Zeynep Tufekci (read her timeline and more) and Kenan Malik.
I took the kids to Disneyland last year, and I was reminded of that experience today. (I skipped the last few Websummits, and it’s grown a wee bit since the first one in Bewley’s Hotel in Oct 2009 and the ones in 2010/2011.) More or less tongue in cheek:
Lots of money spent on AI and figuring out how to manage queues still means “there are lots of queues.”
The food is overpriced. (Websummit has better overpriced food but it’s still well overpriced.) Like at Disney, bring in your own food (especially if you’re a struggling startup) or go outside for food. Base Pizza will run you €11.50 for the best pizza in Dublin and a drink. Or soup in Insomnia, even cheaper.
You’ll spend a lot of time on your feet walking from attraction to attraction. It’s not quite Walt Disney Studios / Disneyland back and forth, but it’s not far off.
The good talks/rides are too short. You’re just starting to enjoy them when they’re over…
Three days is enough. There’s only so much you can take. While it can be great fun eventually you’re in need of something more substantial. And the same goes for Websummit.
The best map of the venue is on paper.
You need to pace yourself. It goes from pre-summit breakfast events, through the summit itself to lots and lots of parties. In Disney it was get in early, take a break in the middle of the day and come back refreshed. Some variation of this is probably a good summit plan too.
Mickey Mouse makes an appearance. OK no Mouse there was someone with a Unicorn hat.
The kids get very excited about the whole thing. And the adults get very exhausted but they go anyway. (That’s a marketing win for both Disney and Websummit )
You have to smile at the fireworks. This year they’re spectacular.
If you have two and a half minutes take a look at the video below on Bronze Age Orientation day. It’s short, it’s funny and it’s true.
It underlines that the concept of change is not unique to our digital age. No company ever likes to transform itself (if the caterpillar had a choice, would it become a butterfly?) but in most cases they have learned to accept the fact that they have to.
Companies have been dealing with mergers, acquisitions, buyouts, restructurings, de-localisations and re-localisations for a long time. If you’ve been through it you’ll know this sort of radical transformation is slow and painful: the all-or-nothing approach forcefully pushed by many consultants is not popular, and also falls flat a lot of the time. The Digital Age is here. It’s been coming for twenty years, and to succeed in this environment every business needs to be a digital business.
We’ll be running a daylong course at the end of the month on Digital Change and how to manage it in your organisation. Our approach is gradual, offers numerous intermediate steps, defines clear deliverables and measurements, and ties companies to their specific context. It’s Digital Transformation tempered by Change Management methods and experience.
This was the fourth in our series of #Candid talks with the wonderful Mary Carty on the 22nd of October. Rowan Manahan who was in the audience kindly did this guest post for us.
The Craft of Creation – Mary Carty
A #Candid talk by the ever-candid, crafty and bogglingly creative Mary Carty quickly evolved into a high-energy dialogue about curiosity. Mary pivoted the talk early onto that subject and immediately challenged the room to think about genuine curiosity and to ask ourselves why certain questions aren’t being asked in our world.
She shared with us the question that she and Anne-Marie Imafidon asked themselves last year – “Why are there so few women in tech?” More importantly, they asked themselves, “What are we going to do about it?”
And thus Outbox Incubator was born. That story of 115 young double-X chromosome geniuses coming through one big house in London is familiar enough to those who followed the rise of the Outbox Executives last summer. Mary took us behind the scenes into the world of “Fun. Free. Food” – which were the pillars on which Outbox was built.
Suffice it to say: to ensure the smooth running of any future enterprise involving young women aged 11 to 22, always provide a quiet, get-away-from-it-all staircase and a bounteous stash of hot chocolate and Mars Bars.
Some of Mary’s other notable questions:
“Have you ever seen a panel of VCs all smiling, all at once, all day?”
“Why don’t we push against or find answers to the known unknowns?”
“Why don’t you give yourself permission to be curious, really curious?”
Take it from a woman who has created some pretty cool stuff – we can create what we want to create, we just have to make the choice to be curious enough to start …
Thanks to Rowan for his take on Mary’s event, it was really powerful.