We Were Promised Jetpacks: More Thoughts on Creativity and AI

With every new technology there is a race to predict how it will impact the future. Generally, I have noticed that these predictions take two forms: the “jetpack form” and the “better buggy whip form”.

The “jetpack form” holds that the new technology will instantly generate stuff you can neither imagine nor see the value of. Like the predictions of jetpacks when I was a kid. Sure, they looked cool, but did I really need one to get around? Even as a child they seemed wildly inefficient, and they kind of only made sense if our world was on the verge of a cataclysmically revolutionary change so vast and profound that nothing I understood then would make sense afterwards, and vice versa. It’s a sort of “magical thinking” promising us a technology that would make everyone and everything taller, smarter, sweeter-smelling, lovelier. And while that in itself is a fairly terrifying concept, at least the sheer comprehensiveness of the promise made some sense of strapping a rocket onto my ass and flying around the neighborhood. Otherwise, not so much.

The “better buggy whip form” is significantly simpler: it holds that the new technology will do what you already do, but do it faster, better, cleaner or something-er. It’s an incremental advancement on the existing. And if its promise is less inspiring, at least one can see its application. It trades “oh wow!” for “oh, okay, that makes sense”.

But both of these forms are often wrong, because each leans too far towards one pole of the equation. The former is too enamored of the technology and believes life will not merely conform to it, but will be radically rebuilt in order to accommodate it. Conversely, the latter is too manacled to the present to see how anything can have any value whatsoever that does not apply directly to the “right now”.

What I think both miss is the human element which is the glue that links the technology to the present. Humans will use the technology to alter their present – but they will do it in ways they frankly cannot imagine yet because they don’t understand what the technology can do. How could they? They’re not experts in the technology. They’re barely experts on the present, something they have significantly more experience with.

All of which reminds me of the unfortunately (and apparently apocryphal) Henry Ford quote – “If I’d asked people what they wanted, they’d have said ‘a faster horse’.”

What Ford – or whoever came up with it (let’s pretend for the sake of efficiency if not accuracy that it was Ford) – meant was that people think in terms of “what is” and that “what is” is based on their necessarily circumscribed understanding of the world, and more specifically, of the current solution. But people like him – and, for what it’s worth, Steve Jobs – don’t think that way. People like him are focused on the problem.

A solution exists because people had a problem to solve. The issue with the way most people look at a new technology is that they think about it only in terms of the current solution. Which is why you get the “jetpack” or “better buggy whip” alternatives – solutions that either throw the baby out with the bathwater (jetpack) or opt for merely incremental change (better buggy whip). Both are based on the current solution. But people like Ford could think back past the current solution, in a sense, to the original problem the old technology was solving – and understand how the new technology would 1) solve it better, but more importantly, 2) open the door to a lot of other things the old technology could not do, things people didn’t yet know they needed but boy, would they soon.

So sure, a car was a faster horse. But it wasn’t about just “being a faster horse”. It was about mobility. Mobility was the problem a faster horse solved, but that a car solved better. A car provided more mobility in more ways and provided it to more people – and that last part is extremely important in a country that was literally founded on the idea of mobility.

Now, the only reason I’m telling you all this is because I feel like I’m seeing the same kind of “faster horse” thinking when it comes to AI.

People characterize it either as a jetpack or as a better buggy whip.

They say “This will utterly change life as we know it. You will lose your job and get another that you can’t imagine right now and then probably lose that too. Everything will be infected with some form of AI. You can’t imagine what any of your interactions with anything will be like because the future will be sooooo different. So stop trying. Just stop it.”

Or they say “Imagine – all the info you’ve been looking for, at your fingertips almost before you finish asking for it! A first draft before you even know what you’re writing about! A finished draft maybe even sooner if your standards are low enough! And more versions of anything than anyone could possibly look at in a thousand lifetimes, let alone produce!”

Both analyses characterize AI in terms of the current solutions, not in terms of the underlying problems those solutions were created for.

Look, AI offers us the opportunity to make cars.

Let’s not waste it just making faster horses.

Milestones: Some Thoughts on Creativity and AI

There’s this great story that the jazz pianist Herbie Hancock tells about playing in the Miles Davis Quintet in Stuttgart when he was young:

“We started and everything was fine, and I remember that we were playing... Tony Williams was playing drums, Ron Carter bass, Wayne Shorter saxophone. And it was a really hot night, the music was tight, it was powerful, it was innovative and fun. We were having a lot of fun and... right in the middle of Miles' solo, when he was playing one of his amazing solos, and I'm trying, you know, I'm in there and I'm playing, right in the middle of his solo I played the wrong chord—a chord that just sounded completely wrong. It sounded like a big mistake and I did this and I went like this: I put my hands around my ears and Miles paused for a second and then he played some notes that made my chord right. He made it correct, which astounded me. I couldn't believe what I heard. He was able to make something that was wrong into something that was right, with the choice of notes that he made, and that feeling that he had.”

Jazz fans will tell you that this is an illustration of what Miles meant when he said “the next note determines whether the note you played was wrong”.

Herbie’s chord was “wrong” in the key they were playing in, but it was “right” in the key that Miles shifted the whole song to by playing the notes he played. (My classical-musician son had to explain to me that keys may be different but can still share the same notes. So while, in the context of, say, the three chords Herbie played, that last one was not in the key and therefore “wrong”, in the context of the note Herbie played before it and the note Miles played after it, it was “right” – it was just in a different key.)
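
(If you want to see that idea made concrete, here’s a tiny Python sketch – my own toy illustration, nothing Miles or Herbie needed – that checks which major keys contain a given handful of notes. The note numbers and the example chord are just assumptions for the demo; the point is simply that the very same notes can be “in key” in several different keys at once.)

```python
# Toy illustration: the same notes can belong to several major keys at once.
# Pitch classes: C=0, C#=1, D=2, ... B=11. A major scale is its root note
# plus the whole/half-step pattern below.
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def major_key(root):
    """Return the set of pitch classes in the major key built on `root`."""
    return {(root + step) % 12 for step in MAJOR_STEPS}

def keys_containing(notes):
    """List every major key whose scale contains all of the given notes."""
    return [NOTE_NAMES[root] for root in range(12) if set(notes) <= major_key(root)]

# A C-major triad (C, E, G) is "in key" for C major, F major and G major alike.
print(keys_containing([0, 4, 7]))  # ['C', 'F', 'G']
```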

Which I think has a lot to do with how we should look at AI and what creatives can do with it.

AI, as I understand how it is currently constituted, uses really complex math to give us “the next most likely answer” based on the information it has at hand. Think of this as giving us “the next most likely note within the key we’ve been playing”. In that sense, it can’t give you a wrong note – it can only give you notes based on the notes you’ve told it you’re playing. (What about “hallucinations” – those freakish aberrations that Luddites and critics of AI like to point to with derision and glee, you ask? Hey, if that stuff is in the data you’re feeding into the math, then it’s in the key, as it were. So it’s not “wrong” – even if it’s horrifying.)
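
(And since I keep leaning on this “next most likely note” idea, here’s a rough, hedged sketch of what it means in code – a toy next-note picker I made up for illustration, not how any actual model is implemented. The candidate notes, the scores and the “temperature” knob are all invented for the demo.)

```python
import math
import random

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_note(candidates, scores, temperature=0.0):
    """Pick the next 'note'. With temperature=0, always take the single most
    likely option; higher temperatures let less likely choices through."""
    if temperature == 0:
        return max(zip(scores, candidates))[1]
    probs = softmax([s / temperature for s in scores])
    return random.choices(candidates, weights=probs, k=1)[0]

# In the "key" we've been playing, G scores highest, so G is what we get.
print(next_note(["G", "A", "F#"], scores=[2.1, 0.7, 0.3]))  # G
```

Notice that the sketch literally cannot pick a note that isn’t on its list of candidates – which is the whole point of the metaphor.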

So AI can’t give you a wrong note, as it were – only humans can (at least right now). And this unfortunately is where a lot of people stop when they’re talking about AI. They say “AI will be able to do everything we humans can do, faster and better and error-free (barring hallucinations, of course), so it will take our jobs and consign us all to the dustbin of history.” Well, those of us not in charge of deciding whose jobs AI can do faster and better, I suppose.

But it is a mistake to stop there. Just like it was a mistake for Herbie Hancock to stop when he hit the wrong chord. Because AI also can’t make that wrong note into something remarkable, like Miles did. Like every musician does when she changes keys. Because changing keys (metaphorically speaking) is not in the math. It’s in us. It’s in our humanity. Hell, AI can’t even make the wrong note that creates the opportunity for something wonderful. It can only give you the next most likely note in the key. Which is fine. It’s just not wonderful.

Later in that same story, Herbie says:

“…the only way we can grow is to have a mind that is open enough to be able to accept situations, to be able to experience situations as they are and turn them into medicine. Turn poison into medicine. Take whatever situation you have and make something constructive happen with it.”

The mistake right now is that people keep looking at AI as if it will help us play more right notes faster. That it will make us more “efficient” – because this is always the expectation of new technology. But invariably it doesn’t make us faster. Quite the opposite. It slows us down because it gives us more options. Options that often reveal paths we never knew existed. And I think that’s the opportunity of AI – for us to figure out how, not just to make the wrong notes right, but to make the wrong notes into something powerful and innovative and fun.

AI – as it is right now – doesn’t do that. But humans, as they are right now, do.

So it’s up to you. What note are you going to play next?

Pearls Before Swine: Some Thoughts on Creatives and AI

A friend who runs an agency called me and was ranting before I’d even said “No I don’t want any…”

“What the hell is it with creatives right now? They’re all clutching their pearls about AI and I don’t understand it. Why are they worried we’re going to sell them out?”

So I told him.

But before I tell you, let me preface my answer with two caveats. First, I, like most people, am not an expert on AI. So take what I say about it with a rather large grain of salt. And second, despite my ignorance (or worse), I am actually pro-AI, if for no other reason than I know that anyone who has bet against the technology has lost. (There are other reasons but that’s a really good one to start with).

But back to the phone call. Because my friend was not wrong. Online, in bars, via emails and text messages, I was hearing it too. “AI is coming for our jobs!” “I’m gonna be replaced by a robot!” and “I’ll burn the industry down before I use it.”

So why? (other than, I concede, that freaking out tends to be our first reaction to things)

Okay, well, let’s start with the unsettling observation that, by and large, creative in advertising in the 21st century is little more than a commodity. And it has become commoditized over the decades by 1) account people who will do anything to keep clients “happy” and revenues flowing (albeit on 90-, 120-, 180-, and even 365-day pay schedules), 2) media people who view it as merely the interchangeable fodder for carpet-bombing digital media plans, which they would A, B, C, and all the way to Z test when allowed (“make a blue one” “make a yellow one” “make a yellow-ish blue one”), 3) Senior Management who blithely gave it away in spec pitches (proving that “That which we obtain too easily we esteem too lightly” is as relevant today as it was in the 18th century), 4) Senior Creatives who chased international trophies over actual results for clients (and were warmly rewarded for their behaviour with the choicest jobs in the industry) and 5) holding companies whose obsession with shareholder value not only squashed profits, but drove HR departments to terminate senior creatives and their salaries and replace them with prematurely elevated (and therefore cheaper) junior creatives whom they would overwork and understaff – that is, if they could find any who hadn’t been lured to the more glamorous and lucrative worlds of Hollywood, gaming and digital media startups (more on holding companies in a second).

Now, you may disagree with that fairly bleak assessment of our industry, but roll with it for the purposes of this essay.

So, if creative is a commodity, merely an interchangeable widget for media plans, which people can give away and has no real impact on business, then why wouldn’t creatives be concerned that something that could produce more of it faster and cheaper would be a threat?

Moreover, why wouldn’t they think that Senior Management and Account Service would flock to it in an effort to give clients more options faster (because “more” is the lazy man’s version of service, and faster the mediocre man’s substitute for quality)?

Indeed why wouldn’t that actually be what service means to generations of clients who were raised during the commoditization of creative to believe it’s what advertising actually is?

And if using AI had the added benefit of further cutting agency costs to meet the ridiculous financial demands the holding company was, um, holding them to (so that’s why they’re called that…) and created a little more room in the budget for them to keep their jobs (after, of course, they “passed the savings on to the client” in a furious race to the bottom) – well, then they’d actually be stupid not to do it.

Plus, honestly, if creative really is a commodity, why would anyone in their right mind spend one penny more on it than they absolutely needed to?

And yes, a creative just said that last part out loud.

Because the name of the game at many agencies – and the holding companies who milk them for all they’re worth – isn’t really about driving sales for the client; it’s about making lots and lots of ads to be in every possible place the potential customers might possibly be, to create the illusion that they’re doing something for the money they’re being paid.

And AI is really good at that. Really really good at it. And creatives know it.

So that is why they are, as my friend so archaically put it, clutching their pearls.

Now, there’s one problem with all of this, and it’s this: More of what I don’t care about doesn’t make me care about it.

That is, you can show me a million ads for, say, flan, across every media touchpoint I have – every TV show I watch, every billboard I pass, every app I use, every website I visit – and you can do it repeatedly, morning, noon and night for as long as I live and breathe, and guess what? It won’t make me care about flan. More of what I do not care about does not make me care about it. I do not care about an ad for flan because I do not like it, Sam-I-Am. And more of those minutely tweaked ads are not going to make me care about it. They’re just showing up and saying “Here’s flan. Buy it”.

What humans, however, bring to the equation – what the best advertising has always done, in whatever form it has taken, for whatever client it has been created for – is to make people care about stuff. Especially stuff they previously did not care about. (And for what it’s worth, that often involves something that is somewhat illogical. Something that shouldn’t work, but does. Which isn’t how algorithms built on providing you with the next most logical answer based on past behaviour generally work – and that, as I understand it, is how AI works.)

Of course, eventually people will remember this – will remember that using advertising to actually sell things, to actually get people to care about things they don’t care about, is actually the point. And that it, on occasion, works. But when? When will they realize this?

When sales go down and stay down. When companies start to go out of business and are desperate not to go out of business. When the conversations in the how-come room are “Why is the advertising failing?” “I don’t know, it’s everywhere everyone is!” “Wait. Maybe advertising is not about banners and billboards and social galore. Maybe advertising – perhaps – means a little bit more.”

But it won’t happen right now. Right now, clients will flock to it and agencies will provide it because, well, like I said, why would they not? And zillions of ads will go out and they won’t work because they’re not focused on making people actually care, so no one will, so no one will buy, so companies will go out of business and won’t be able to pay the agencies that made these ads for them, so those agencies will go out of business too. But the CMOs who made those calls and the agency heads who facilitated them? They’ll be long gone, having ridden their publicity about using AI into better gigs long before the shit hit the fan.

And this feels like a good place to remind you, dear reader, of what I said before; that I’m pro-AI. I think it’s actually a good thing. Do I have a death-wish? A quotient of self-loathing that would make a Mets fan blush?

No (well, yes, but not where AI is concerned). I continue to think that AI will, eventually, help us achieve things we can’t imagine achieving today as I’ve written about elsewhere.

But that’s all well into the future. Which is a long way off. Right now things are different, and right now my friend who runs an agency is on the phone asking me why creatives are worried that agencies are going to sell them out for AI.

“They’re worried, old boy, because you will.”

Out of Sight, Out of Mind: Culture and the Return to On-Site Work

Finally, our long national nightmare is coming to an end. No, not that one. No, not that one either. The “work-from-home” one. At long last people are returning to their offices. And why? In order to finally do the work they’ve been ignoring since March of 2020? No, all that work got done.

Because of culture.

Apparently companies – advertising agencies among them – are “asking” their employees to come back into the office because the culture of those companies is dying. Not the work, not the profits, not the, um, whatever else there is. But the culture.

Now, there’s always a lot of talk around culture at advertising agencies, and that’s for two simple reasons. The first is because agencies, determined to convince clients that they can make them distinct in the marketplace, are desperate to show that they are distinct too. And they do that through culture. “Look, we have a foosball table! Look, we have bean bag chairs!” Okay, bad examples.

And the second? Well, that applies to pretty much any company; it’s because people work with people they like - and more often than not, people they are like – and culture is in many ways a natural expression of the people in the office. If you like the culture you’ll probably like the people. If you don’t, you probably won’t. And if you can’t tell what the culture is? Well …

Now this culture is an expression of the agency because, it would seem to me, it grows out of the work we do together. If nobody cares about what they’re doing, if everyone’s just going through the motions, well then your culture probably reflects that. If the work is produced via a nightmarish clash of titanic egos, in which entitled bullies terrorize a nameless horde of navvies, slaves and drones – well then, I’m guessing that’s gonna come across at the Christmas party too (hopefully there’s an open bar. Or maybe not).

That’s also why it binds the agency together. Because it’s how we do what we do. And that “how”, as far as I can tell, operates through three distinct vehicles – shared experience, legacy and familiarity.

“Shared experience” has to do with what we do together (duh). When an agency all does something together, we become entwined in each other’s personal stories in a way that is more lasting than work. It’s actually kind of fascinating and it creates a kinship beyond simply “we all work in advertising” or “we all work at agency x” or “we all work on the Y account”.

“Legacy” has to do with the creation of tangible things. An experience lives only in the mind, but you can literally see a tangible thing, and that can remind us during dark, culture-less days, of who we are. Additionally, its strength is that it lives outside of the participants as a sort of icon or expression of the culture. It can even live beyond the lives of the participants as a sort of talisman – “Look, this is the standard. To do things like this. This is who we are.” To remind participants, as well as subsequent generations – and even those not in the culture – what the company is.

“Familiarity” is about the easy exchange of expertise, information and knowledge. It’s the strength of “many-to-many connections” and the bonding opportunity is particularly helpful connecting people who don’t normally share their expertise because they’re in different departments or different strata or whatever. Indeed it is these connections – as Gillian Tett explains in The Silo Effect – that can save companies from becoming siloed and doomed, whatever their culture.

And just as each of these vehicles can communicate the culture of the agency in terms of work, each can communicate it beyond work. A shared experience can be a client presentation or it can be an outing to a baseball game, right? Legacy can be the awards won for the work or photos from holiday parties. Familiarity can be expertise about the client’s business or about music or films or just life in general.

And I can see where certain people might feel like the transference of that culture from “work” to “non-work” can only happen when we’re all breathing on each other. Especially because a lot of that transference probably happens passively, and work-from-home, as I’ve written elsewhere, isn’t great at passive action.

But here’s the other thing about these vehicles – none of them requires that you all be in the same place at the same time.

Can you generate “shared experiences” without all being in the same place? We do it every year with the Super Bowl and the Oscars. Can you create a legacy piece if you’re not all in the same place? Playing For Change has done it with people literally all over the planet as have school children with the Flat Stanley project. Can you become more familiar and share knowledge over long distances? I’m sorry, I was just checking my socials – what was the question again?

Hate these examples? Great! Think of your own. As advertising agencies we are literally in the business of figuring out how to do all of these things to create culture when we’re not in the same room as the customer. That is our job. Why is it not then incumbent upon us as managers at advertising agencies to figure out ways to make culture happen regardless of where we or our people are? To say that we have to be in the same room with people in order to have an impact basically discounts the efficacy of every tactic we sell to clients - except potentially for event marketing.

Look, if you want to be in the office because you like being in the office, because you like commuting, because you like riding in elevators and using coffee machines, then by all means, be in the office. But don’t tell me it’s about culture. That excuse is disappointing and disheartening and is enough to make one think that culture isn’t really the reason they want you to come back in after all.

Keep the Change: Some Thoughts on Small Business Banking

As part of our ongoing efforts to make sense of what the hell is going on in the worlds of our clients (and ourselves), LevLane commissioned some research on what small businesses were looking for from their banking partners (you can read more about it here). A couple of things got me thinking.

One had to do with the statistic that 71% of the respondents felt there was a growing gap between what banks think they need to provide to small businesses and what they were actually providing.

Now, of course, people always think service is worse now than it was. But even with that caveat, the word “gap” is intriguing. It implies a dissonance of expectations. That the banks think what they’re offering is satisfactory and that their customers think it’s not.

One explanation is that this could just be a matter of degree. With more things being automated, with the rise of chatbots and the constant drumbeat of cost-cutting, small businesses may feel that the level of attention and thoroughness and service has diminished – “you’re no longer holding my hand all the way through the transaction, just halfway,” for example. Perhaps. I think it might be something else.

Businesses small and large have changed in fundamental ways since Covid. The idea of hybrid work, for example, was once a wild pipe dream on the outer fringes of what employers, at least in the US, would seriously consider offering their employees. Now, it’s literally the compromise offered to employees who have been working remotely for the past couple of years (which wasn’t on ANYONE’s radar in 2018, by the way) and don’t want to come in at all. How is that affecting small businesses, what they expense, what they have to spend money on, what they can write off?

Or how about something as apparently innocuous as curbside pickup? Once available only at the kinds of shops that Thurston Howell III shopped at, now it’s standard operating procedure for every brick-and-mortar from public libraries to Walmart and Target. Does this require different kinds of employees, different training, different parking lots and building configurations for storing the goods to be picked up? And speaking of “picked up” – what about restaurants, which saw their business crater until they figured out that they could just about replace their dine-in revenue with delivery and takeout – which they are not going to give up now that people are sitting down in their restaurants again?

I’m certain that every industry can point to similarly significant disruptions and expenditures that they incurred – and are continuing to incur – because the nature of their business (that is, the way people like you and me want to buy goods and services from them) has changed.

So if their businesses have irrevocably changed, isn’t it entirely possible that their banking needs to meet those fundamental changes have changed just as radically too?

But have banks acknowledged that? Have they adapted? One thinks that if they had, 71% of respondents probably wouldn’t be complaining of a growing gap between what they need and what they’re getting.

Businesses’ tax burdens might have shifted, their revenue streams might have changed, their personnel and the actual jobs they hold may be profoundly different than they were before Covid – all of which may have significantly changed their financial exposure. Have their banks adapted to any of these things – and more? Have they sat down with their clients and done an assessment, compared their needs now to three years ago? And not just a “so, how are things different?” but an informed comparison, one that helps the business think deeply and objectively about what’s different now and what help they need? So they can actually do something about it, and not just have this vague sense that, somehow, there’s a gap between what they need and what their bank is giving them.

And to be clear, I am not blaming the banking industry for this. There’s a significant element of “boiling the frog” at play. Businesses responded to the disruption of Covid on the fly, in real time, manufacturing stop-gap solutions, some of which became permanent, some of which did not (did any of us really think that sidewalk dining in December under reconstructed tents was going to last, no matter how many space heaters they pointed at us?).

Businesses and their solutions evolved over time. Which is why this is a great time for banks to get on the front foot and make that assessment. To sit with their small business clients and say “okay, here are the services we were providing to your business before Covid. And here are the needs your business had before Covid. What’s missing from the first list now? What’s missing from the second?”

And if companies feel that their businesses have changed on some very fundamental levels, then why wouldn’t they assume that their bank’s business has changed on similarly fundamental levels? In ways that might not be to their advantage. And this is the second concern that statistic raises. Is their bank still as solvent as it was? Is its cash flow still strong? Is it over-exposed in ways it never would have been had Covid not happened? Has digital and online banking become so much more important now, and is their bank robust in those areas?

And we’re not even talking about other non-Covid events that may be on their minds – like, say, the impact of high interest rates or bank closures. We’re only talking about how businesses function differently post-Covid and how that has impacted how banks do business with those businesses. (Of course, layering those other things on top certainly doesn’t lessen the concerns.)

That’s probably why more than half of them said that “more help” from their bank would make them feel like it was a “trusted partner”. But hold on.

If small businesses have changed because of Covid (which we, as the customers of those businesses, have definitely seen to be true) and if small businesses feel a “growing gap” between what they’re getting and what they expect from their banks – a gap that would be closed significantly if banks offered services that better align with this new world order – then how can small businesses really trust that their banks are helping them prepare for, or prepare against, the next big Covid-like thing that’s coming – whatever that may be?

Or said another way, if their banks haven’t adequately responded to what is in these small businesses’ rear-view mirror, how can they believe they’ll help them with what’s coming down the road at them? What’s coming down the road that they can’t see, because they’re too busy dealing with the day-to-day of, you know, running their businesses.

Curiously, this is the real opportunity for banks that this survey identifies. The opportunity to be the forward-thinking economic resource for their customers. A resource that can aggregate observations from across the industries of their many other clients to give a local perspective that complements the national perspective on threats and opportunities that they can get from others. Which can in turn, become proprietary observations and information that will help them attract more clients who will find it more valuable than ever in a post-Covid world.

The banks that start doing this now will be able to close that gap more effectively and faster and earn the trust of their clients sooner than their competitors. Which is a competitive advantage that may make all the difference in the coming years.

For them and their clients.

Why Can't I Be You?: Some Thoughts on Branding

I was trying to help a client develop a new brand in the food industry. For a product that he was still trying to perfect. With a production process he was still working the kinks out of. And in a category that, if we’re being honest, sort of didn’t exist yet. So there was a lot. 

And because of all of those variables, the brand we were developing, the voice and tone we were talking about for this “still to be perfected product in a non-existent category,” was going to be a little unusual. Okay, a lot unusual. Because it had to be. It had to be because otherwise it wouldn’t stand out at retail. You’ve stood at the shelf in a grocery store or a bodega. You’ve seen the thousands of products screaming at you – and that was when you knew what you wanted. This was something that nobody knew about yet. In a category that, like I said, sort of didn’t exist yet. And because it was a startup, it certainly wasn’t going to have the cash behind it for advertising and marketing that competitors like KraftHeinz or Campbells or Mondelez count on. So really, the shelf was our only battlefield.

Which meant we really had to have a personality on shelf that was attention-getting – because if it didn’t cut through there, we were dead.

But it also needed to be distinctive for another reason, I told my friend.

Because it was.

He was confused. So I told him a story.

Marc Maron is a famous comedian with a famous podcast and he told a story once about when he wasn’t so famous. Not unknown – he had fans and he’d been in movies and on the late night shows and he made a living – but he was certainly no Kevin Hart or Dave Chappelle or Jerry Seinfeld.

“And he was doing a show one night and the club was pretty full of fans who had come to see him, who knew his kind of humour and who made a point to come out when he was in town. And a few minutes into the show, a couple of couples were seated near the stage. No big deal. But after a few jokes he could tell that they had no idea who he was. He could tell that they had said “Hey, it’s Saturday night, what do you want to do?” “How about we go to a comedy club?” “That sounds fun!” Which is fine, no harm in that at all. But he could tell as he was doing his routine that his jokes weren’t really their thing. And to be clear, they were good people, they weren’t assholes. They were trying to dig it. But it just wasn’t their kind of comedy.

“And so Maron stopped the show and turned to the couple of couples and said ‘Hey guys, look, you seem like nice people really, so I’m gonna level with you. What you’ve seen for the last ten minutes? This is pretty much it. I mean, it’s gonna be like another hour of this. It’s not gonna suddenly turn into Carrot Top or Kevin Hart or whatever. It’s gonna be this. A lot more of this. So if you wanna jet, that’s fine.’

“And when he said that, he suddenly realized, as obvious as it sounds ‘Oh, I’m not for everyone.’ And to be clear, this isn’t ‘No one likes me’ or ‘I’m no good’. It’s something entirely different. It’s that people like different things. Some people – like all the folks in the audience EXCEPT the couple of couples who came in late - they got him. They got his humour because it was their humour. They thought he was funny. But some people didn’t. And that’s fine. He’s just not for everyone. And it was a real epiphany for him and I think it changed his life in many ways.”

And when I related this story to my friend, his eyes lit up and he nodded enthusiastically and said “I get it! You have to have the courage to be who you are! Your brand has to be brave! Has to be heroic! You’re exactly right! That’s great!” 

Now, loath as I am to disagree with a client when he’s agreeing with me and telling me I’m great, I sorta had to here (really, it’s amazing I have a career at all).

“Well, yeah, but no. See, I think with marketing there’s often a lot of talk about ‘You have to have courage! You have to be strong! You have to be fearless!’ or whatever. And while yeah, wishy-washy is bad, there’s more to this stuff than just telling some poor brand manager whose sales are tanking that he has to be more entrenched, that he has to be more tenacious, that he just doesn’t have enough god damned backbone.

“For me, Maron’s story is so great because it reminds me ‘You have to be what you are, not because everyone else is an asshole (though, yeah, probably) but because really, that’s all you can be. Because that’s what you’re gonna be. Some folks will dig it and some folks won’t and your job is to spend your time seeking the former out and not wasting time on the latter. And if the latter come around at some point, great. But they’re not likely to come around because you tried to twist yourself into something you’re not. They’re probably going to come around because they finally got what you were. No one became a fan of the Rolling Stones because they were ‘the new Beatles’. They became a fan because they got who the Rolling Stones were. Same with Maron. Same with your brand.”

And I get that it’s hard for brands and their brand managers who have CMOs, distributors, holding companies and more saying: “This is how we do things” and “Look at this case study!” and “Why can’t you be more like your brother” (oops). But that’s why it’s so important.

Because while it’s easier to just be a clone of someone else – the template is already there – ultimately you’re not only not giving people a reason to buy you, you’re actually reinforcing the reason they should buy someone else. At best they’re gonna say “I love that – it feels so [insert brand here]” and then go buy that brand.

So don’t be the new Apple. Don’t be the new Oatly. Don’t be the new Liquid Death. Instead, dig deep into what you are, what makes you unique, and how you uniquely meet some need that maybe no one has ever figured out even existed before – and then be the hell out of that.

Because for what it’s worth, you really don’t have any other choice.

The Past and Pending: Looking at Flair.ai

I ran into Mickey Friedman’s graphics program flair.ai at Noah Brier’s first BrXnd conference and at the time I was intrigued by it because it (like so many things at that conference) seemed to be showing how AI could be used in a practical way to help advertising agencies, or at least the creative departments. And so, following Noah’s admonition to tinker, I did.

Now, to be clear, this is not going to be a “review” of flair.ai. On the one hand, my background is as a writer, so I’m hardly qualified to talk about the nuances and challenges of a sophisticated visual design program. And on the other hand, I’m an Executive Creative Director – so I’m not really qualified to talk about anything when you get right down to it.

That said, flair.ai is a platform that uses artificial intelligence to create quick and simple product tabletop shots digitally. Got a boring photo of the craft beer you’ve been brewing in your basement? Now you can drop that image into flair.ai’s many scenes and – PRESTO – the crummy little photo you took on your kitchen table now looks like it was styled and designed and lit by people who actually knew what they were doing.

Here’s your beer on a snowy mountaintop, here it is in a field, here it is in some other damn place. But not just backgrounds. Flair.ai lets you customize the scene with pedestals, “elements” (leaves, flowers, etc.) which you can place in the image for dimension and depth and style, and even humans in various poses (especially handy if your product isn’t so much a CPG as it is a shirt or dress).

And because it uses AI, you can adjust the lighting, shadows, ripples, everything, so that your friends will think that you really did shoot your beer on the banks of an Amazonian lagoon.

It’s fast, simple, and easy to use. When I played around with the demo I took a quick photo with my phone of the hip flask on my desk (yes, I have a hip flask on my desk, doesn’t everyone?) and it masked it and corrected it accordingly when I moved it from scene to scene. (On the other hand, the masking was somewhat less accurate when I took a picture of my glasses, but hey, let’s not look a gift horse in the mouth. If you want a complicated image like that masked, open up your copy of Photoshop, hotshot.)

It seemed to me – a simple ECD – that this could be really useful for someone who was a small entrepreneur and wanted to flog their products online without having to hire outside resources. Outside resources that would take time and money away from all the ten thousand other things that a startup entrepreneur spends her time on – you know, like actually making the stuff she’s selling.

And that was how LevLane CD RJ Cassi saw it too when I showed it to him – a quick way for someone who wasn’t a retoucher or even a reasonably talented art director to create lots of digital assets for lots of small CPG goods to appear on something like Amazon.com. Assets that were a step above just a blank product shot (and therefore hopefully more compelling and, dare I say it, “brand building”), and yet still straightforward enough for functional usage.

Which Senior Art Director (and master retoucher) Chris Shea agreed with. “I can already do all of these things,” he said, “so why would I need this platform?” And with Photoshop robustly integrating AI technology into its offerings, “Why,” asked CD Jeremy Johnson, “would I need an outside resource if the current software I use is already going in that direction?”

All valid points for agency folks, and all things that this simple agency copywriter would not have thought about.

But like I said, what if this isn’t for agency folks? What if this isn’t for art directors at all? What if it’s about democratizing tabletop photography? Isn’t that the point of the internet after all? That we can all do everything?

That got me thinking about the days when personal computers initially made their way into agencies. When they were heralded as ushering in an exciting new world of technological advancement – and decried as spelling the end of creativity as we knew it. When they were seen as competitive advantages that would launch companies into the stratosphere – and as harbingers of doom that would destroy agencies too. When the press was filled with stories of how companies – advertising agencies in particular – were incorporating personal computers into their businesses, cheek by jowl with stories of how agencies were discrediting them as fads and flashes in the pan. It was the best of times, it was the worst of times, blah blah blah.

Thank God no one is saying anything like that about AI.

At ground zero of all of this was the Apple Macintosh. People complained that it wasn’t a “serious” computer, that it was a “toy”; that it wasn’t for “adults”, it was something for “kids”. And that it certainly wasn’t something that had any business being in an office. Literally.

Which is probably why, if you look at ads for the Mac in the mid-1980s, Apple began explaining how this revolutionary little box was going to – was already – changing what was possible for businesses because it was putting graphics and design and art into the hands of any Tom, Dick or Mary who could operate a floppy disk. Indeed, you literally had TV commercials that compared the Mac’s output and speed to that of graphic designers, typesetters and art departments (you can see some of those ads here).

My friend Lance Thomas, Partner, CD and CEO at Origin Agency, had the same thought: “This is all reminding me of the Photoshop craze when it first came out. Every person with a computer was suddenly an Art Director and artist, because it was so easy to manipulate and explore the tools and effects. Eventually it shook out that it was just another tool for the Art Directors really.”

Lance is right, of course, especially about that last part. Because even though that’s where all of this started, even though it was all sold in as a sort of “democratization of graphic design,” that’s not where we wound up, is it? We wound up making things that we could never have imagined when we first cracked open MacPaint. The tools that were built to let any moron with a power suit and hair product decorate presentations and newsletters turned into sophisticated machinery that created art. Machinery that didn’t replace art directors and illustrators and designers, but made them more valuable by taking them to creative places none of us knew existed.

So maybe I’m wrong when I characterize it as something that sort of democratizes tabletop photography so every entrepreneur, hobbyist and packrat can make cooler images for their Etsy pages. Maybe I shouldn’t be looking at it as a finished thing in and of itself. Maybe, as Churchill said during another era of cataclysmic change, “This is not the end. This is not the beginning of the end. This is, perhaps, the end of the beginning.”

Dr. Feelgood: More Thoughts on Hospital Marketing

To say that a hospital is made up of doctors is obvious. What is less obvious is the idea that a hospital’s brand is made up of hundreds of sub-brands over which it has absolutely no control. For each doctor is, whether they know it or not (whether they will admit it or not), creating their own brand with every interaction they have with every patient, co-worker, member of the medical community and partner.

I say, “whether they know it or not” because while they would bristle at the suggestion that they would ever stoop to anything as crass as marketing, they will nonetheless make it extremely clear to you what their relationship with their patients is and is not, what they are and are not willing to say or do (because it’s “unprofessional”), who they admire and who they despise – in short, all of the things that those of us in marketing basically use to define a brand.

So, does that mean each hospital functions like a traditional “house of brands”? Nope. And that’s the problem.

See, in traditional “house of brands” marketing there is a synergy between the overarching brand and the sub-brands beneath it. The “overarching” brand provides unity and stability, and usually embodies a promise that the sub-brands can exemplify or prove to their different consumers. The overarching brand – because it’s not tied to any one product – can be more aspirational and abstract than the sub-brands under it, which are by definition very specifically product- or service-based.

“But,” you complain, “that’s exactly what hospitals are doing.” No, not really. Sure, they often aim at something aspirational, but it’s usually so ridiculously broad (in the hopes that it will sort of apply to every aspect of the hospital) that it ends up meaning nothing and offering doctors no real path to fulfill it. Which, by the way, their patients don’t want them to fulfill. “We’re leading edge! We have top technology! We went to Ivy League schools!” Most doctor-patient interactions have nothing to do with that – which is just how patients want to keep it. “Hello Mr. Jones, your tests came back normal. See you next year.”

So not only do the sub-brands not support the overarching brand, they don’t even know they’re supposed to (not that they would if they did, probably).

And while there are many marketing reasons why this is important, two of the most important business reasons (yes, you read that right; a creative is using brands to talk about “business issues”) have to do with how the synergy of these sub-brands helps those running the whole company figure out how to run it better.

For one of the ways an ABInbev or Unilever or General Motors decides which brands it needs to acquire and which it needs to jettison in order to be more successful is by identifying 1) what gaps it needs to fill to provide legitimacy for the overarching brand’s message as consumers evolve and change, and 2) what sub-brands are contributing a message that has become at odds with or irrelevant to the overarching brand’s message.

But none of that really happens in hospital marketing. Which is exactly why it’s important to hospital marketing.

Look, people don’t “buy” a hospital any more than they “buy” ABInbev. They buy a beer. They “buy” a doctor. Because the beer – and the doctor – are the interaction they have. In many ways, that beer is the brand of ABInbev and that doctor is the brand of the hospital for that customer. And they will carry that perception of the hospital – driven by their experience with that doctor – with them long after the brand campaign you launched today has been replaced.

So what do you do? Put the doctors on the billboards since they’re the product/brand? No, because it’s not only not cost-effective, it’s not any kind of effective. Do you craft some broad platitude about “care” or “health” that is so bland that its meaninglessness is equally meaningless to every specialty in the hospital? No.

You start by going exactly where most hospital C-suites don’t want you to go. You go to the culture.

Look, every hospital – like every organization in every industry (even advertising agencies, for god’s sake) – has a culture. It could be competitive, it could be innovative, it could be cheap, it could be anything. But that culture is made up of – defined by – the people within the organization. And in a hospital those people are the doctors. And while each one of them has their own brand (see above), what should happen, what likely happens, is that the Venn diagrams for the doctors all overlap somewhere, and that somewhere is the culture of the hospital. (And yeah, sure, as with every organization, there are some outliers, there are some groups that have unique cultures or cultures that are at odds with the rest of the organization. Whatever. Nothing is perfect.)

But that’s where you start. By understanding the culture, by sifting past the platitudes and the superficialities that most hospitals settle for (because frankly they can do nothing else) to find out what this hospital really is like from the people who work there. The good and the bad. The crunchy and the smooth. Only then can you find something that is not only true but that the doctors and the staff and administrators can believe in. Because they’re already living it. And if they believe it, they can prove it to the public. Because they already are.

It's hard, but don’t worry. It’s not brain surgery.

A Little Bit You, A Little Bit Me: Exploring BrXnd Collabs

A long time ago, in an effort to get my advertising students to think about brands in ways they hadn’t (but also, in a sense, had), I asked them to think about what a car would look like if it was created by Apple. Would it look like every other car? Why not? What would be unique about it? What would it do that other cars didn’t? Would it be more like a minivan or a Maserati? Oh and don’t just tell me that it would have a computer in it – that’s missing the point, you fail, get out and go back to math class.

The point of the exercise was to help the students separate their thinking about Apple from the products Apple produced in order to understand what made something uniquely, you know, “Apple”. To try to put some specifics to conversations that invariably were limited to “their stuff looks cool” or “they’re amazing” or “I just really like them.”

There’s an element of that exercise in BrXnd Collabs, a new experiment from Noah Brier. Noah, you may recall, burst onto the public consciousness with BrandTags back in 2008, which, as Fast Company explained, “invited people to ‘tag’ a brand with the first word that came to mind (example: WalMart = big, cheap, etc.), producing a visual ‘tag cloud’ that offered a kind of shorthand, crowdsourced summary of a brand’s meaning.” And even though it sort of started out as a way to test Noah’s friend’s hypothesis that brands live in people’s heads, it ended up being very revealing about just what those brands meant when they were in people’s heads.
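
(For those who like to see the mechanics, a toy version of that BrandTags mechanic – collect everyone’s first-word association and count them – might look something like the Python below. The responses here are invented; the real site’s data and code were, of course, Noah’s.)

```python
from collections import Counter

# Toy sketch of the mechanic described above: gather everyone's first-word
# association for a brand, then count them. The tag frequencies become the
# "tag cloud" (bigger count = bigger word). The responses here are made up.
responses = ["big", "cheap", "cheap", "everywhere", "big", "cheap", "plastic"]

tag_cloud = Counter(responses)
for word, count in tag_cloud.most_common():
    print(f"{word}: {count}")
# cheap: 3
# big: 2
# everywhere: 1
# plastic: 1
```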

“This project,” Brier says, “is just many orders of magnitude more.”

In BrXnd Collabs, Brier uses AI to imagine what brand partnerships could look like. Okay, that sounds dull, let me try again. Because it’s not just two brands “shaking hands” and making nice for the camera, which is what most partnerships are. This is more like putting them into the CERN Large Hadron Collider and flinging them at each other at nearly the speed of light. They’re more brand “collisions” than “partnerships”.

The interface is simple – you choose two brands from a long list of choices, and also an “item” that you would like to see them expressed on (which, it occurs to me, brings its own sort of “brand” elements to the mix, but let’s ignore that for now lest our heads explode). You click a few buttons and whammo, your collab request goes into the hopper and the AI starts churning away.

Because of the AI, the results are some sort of hybrid of the two brands in question. Sometimes they’re exciting and innovative combinations of the characteristics of two very different entities that actually make you rethink some of your conceptions about them. And in some cases, the combinations are just 12-car pileups.

In the former camp you’ve got things like the collab between Brooks Brothers and Wrangler. Yes, both are clothing brands, but with wildly different audiences – and even where the audiences do overlap, wildly different ways of engaging with them. Brooks Brothers, founded in lower Manhattan in 1818, has catered to the elite for over two hundred years, outfitting nearly every president since James Madison and getting namechecked in works by F. Scott Fitzgerald, Richard Yates and James Thurber.

Wrangler, on the other hand, is a more 20th century brand, one that was overhauled in the 1940s when Philadelphia’s own Bernard “Rodeo Ben” Lichtenstein was hired to design a new pair of jeans specifically for cowboys. He was so innovative and so successful that by 1974, Wrangler was endorsed by the Pro Rodeo Cowboys Association. In the 80s, Wrangler sponsored Dale Earnhardt’s car, and by the 90s, nearly 1 in every 5 pairs of jeans worn was Wrangler.

Both successful. Both authentic. Both very American. Both cater to a distinct audience. But the two could not be less similar on the rack.

And yet the collab… works. It’s a button-down but it’s a casual button-down. Not something you’d wear with a repp tie, to be sure, but also, not something you’d wear if you were busting a bronc. It very cleverly navigates a sort of middle ground between too formal and too casual – while still feeling authentic and real and American.

And all this is very cool and a really interesting (and arguably, billable) way of wasting an afternoon (or three) because you’re not just looking at these partnerships – you’re actually making them yourself. You’re the one deciding which two brands to connect – which means you probably have some personal connection to the brands, they mean something to you so you have a sort of stake in what comes out the other end. Which makes it not only more educational and illuminating, it makes it just more damn fun.

So kudos to Noah and thank you very much. Except…

Except I want more.

Collabs opens the idea that you can take one brand and all the things it means and represents and mix it with another brand and all the things it means and represents, and come up with some potentially, utterly new thing. What I am frustrated by is that the AI platform appears to be looking for patterns that are primarily visual – colors, textures, logos, designs and, um, patterns. It’s not looking at strategy. It’s not looking at concepts.

Take for example, the collab between Lego and Nike. Now of course, I think it’s entertaining that the logo on the front of the shirt says “Liike” – that’s very clever.

And yet, the output is just a t-shirt with a swoosh and some brick-type elements. Come on.

Can we dive a little deeper into these brands to produce something a little more remarkable? Like they’re both about empowerment, right? Whether that’s “Just do it” for Nike or “to inspire and develop the builders of tomorrow” for Lego. But different kinds of empowerment, perhaps. One more physical, one more intellectual (though, yes, I understand that there’s an intellectual component to sport and a physical component to constructing complicated models).

And further, Nike – and of course I’m overstating here, so calm down, Beaverton – is about improving yourself, while Lego is about creating things outside of yourself. And yet because of that, they’re both aspirational. About being vehicles for accomplishing things you thought beyond you.

To be sure, these are all “concepts” as opposed to facts. But Tim Hwang has insisted that AI is fundamentally a concept-retrieval system, as opposed to a fact-retrieval system. So this should be right up AI’s street, right?

Indeed, while they are concepts, they are also words in the sentences of a language we use to communicate to each other to help give our lives some meaning.

And that’s what I want. I want to collide the meanings. To find new meanings. Because perhaps when we collide the meanings we will find new insights into the brands people use and thus, into the very people themselves.

Look, obviously I’m no coder and the ways of AI frighten and confuse me. So what I’m asking for may be ridiculous, absurd, impossible.

But hey, ridiculous, absurd, impossible - that’s AI’s sweet spot, right?

I’m Only Bleeding: Why Most Hospital Advertising Sucks

Consumers waffle, products obsolesce, and tactics come and go with the newest technological upgrade. But generally speaking there are always two things that we as advertising people can count on when we are asked to make an ad for a client’s product.

First, that the people we’re talking to have a need that our product fulfills. For without a need there’s no reason for us to be having a conversation at all.

And second, that there is some kind of opportunity for us to make those people want our client’s product’s particular way of fulfilling it.

Except in hospital advertising, where neither of these things is true.

And frankly, this is one of the things that makes it so interesting. And also, why there’s so much terrible hospital advertising being foisted upon the public.

Wait, what? 

Most of the time, hospitals are advertising services to people who don’t need them. “#1 cancer center”? Yeah, well, I don’t have cancer. “Top maternity services”? Um, sorry, not pregnant right now. “Leading cardiac care unit”? Thanks boss, the old ticker is just fine.

To translate this into the language strategists try to impose upon poor creatives during briefings, the consumer by and large is not in the hospital CATEGORY. They’re not currently hospital users. Furthermore (and more troubling for the aspiration-obsessed world of advertising), they don’t aspire to be hospital users. That’s because – with the possible exception of maternity – hospitals imply disaster, disease, and crisis. And who aspires to that? No one. So the public is not in the category, which is the opposite of the situation when we’re talking about a soda (thirst), a car (transportation), a TV show (entertainment) or pretty much anything else.

And if they’re not in the CATEGORY, how the hell can I make them select MY client?

This is why you end up with advertising that basically says “that disease you don’t have and hope you never get? Yeah we’re better at it than other people.”

Compelling, right?

But wait, it gets worse. And it has to do with that word “better”.

Because even the people who do respond to your message, the ones who, for whatever reason, do have a need they think your hospital can fulfill – they really have no way of evaluating whether or not your offering is really, actually, “better”. And if they can’t perceive you as meaningfully better, then they won’t choose you over your competition (I realize that’s really obvious basic advertising fundamentals there, but it bears repeating).

A car, I can drive – and I can like the handling or the power or the suspension. Or I can not. But I can call one “better” than another for those reasons. A beverage I can like the taste of, or not. A movie can entertain me or not. But a doctor? How do I know if this person knows what they’re doing? I have a disease, they sit me down, they explain things. They ask me if I have any questions. How the hell should I know? I’m not a doctor. All I want is the best. All I want is to know that I’ve got the best doctor for whatever the hell it is that ails me. And I want to know this while 1) I’m not feeling great (because I’m sick, remember?) and 2) I’m sort of terrified that I won’t find out whether I do have the best doctor until it’s too late.

So what I do is what most advertising does – default to externalities.

If they went to a good school, they must be good. Or if they won awards, they must be good. Or if they have cutting edge technology they must be good.

Except, really?

The most successful college football program of all time hasn’t had an MVP in the NFL in forever, while, conversely, arguably the greatest NFL quarterback of all time couldn’t get off the bench at his alma mater. Or, said another way, “past performance is no indication of future earnings.”

And awards? Citizen Kane wasn’t awarded best picture in 1942, James Joyce never was awarded a Nobel Prize for literature, and Jim Brown wasn’t awarded the Heisman. So you know, sometimes who gets awards isn’t the best measure of actual quality. Plus there are so many of them now, split into so many confusingly different and nuanced categories that it is almost impossible to determine what’s really meaningful.

And technology? Just because you have some big machine I’ve never heard of in some specialty that I’m not suffering from, doesn’t mean you’re great. My uncle has a Porsche 911 GT3 RS but that doesn’t make him Dale Earnhardt. And I know a brain surgeon who literally cannot operate his DVR.

And remember, that’s for the relatively small group of people who actually do care about your advertising. For the rest of the public who’s seeing your billboard, your tv spot, your radio commercial, your proof points are, well, ridiculously irrelevant. It’s like you’re saying “our buggy whips went to Harvard!” “Our buggy whips won this award you’ve never heard of!” “Our buggy whips use cutting-edge technology.”

Framed this way it becomes really understandable why not only is so much hospital advertising terrible, but also why so much of it looks so similar. You can literally swap logos on creative and no one will know the difference. How do I know this? Because I’ve done it and they didn’t.

So you have to start from a different place. Not from outside, but inside.

You have to figure out what makes the hospital unique and special – to the community, to its employees, to its patients. Not the category things.

Specific things, but not anecdotal.  

You have to start by remembering that you are talking to one person. Not 20 million eyeballs on a website, not 15 million viewers on a tv spot. One person.

One person who, if they are in the category, is worried, upset, concerned. Because they, or someone they care about, is sick.

It’s kind of like, everything you do for every client, you have to do it dialed to 11 with hospital advertising. The focus, the uniqueness, the nuance, the attention to detail, the insight.

Because the thing that the people who are in the category want, is trust. They want to be able to trust you. And the thing the people who aren’t in the category will remember, is that they feel they can trust you.

You know, like with any client, really.

It’s just more important with hospitals because the stakes for your customers are literally life or death.

Failsafe: Quick thoughts on the Executive Order on Safe, Secure and Trustworthy Artificial Intelligence

On October 30th, President Biden issued an Executive Order on “Safe, Secure and Trustworthy Artificial Intelligence”. Here are some quick thoughts on what that means.

First, “Executive Orders” as you may recall, are declarations by the President directing Federal officials or administrative agencies to engage in, or refrain from, courses of action. In this case, President Biden directed several agencies to investigate and act upon different aspects of AI.

For example, this Executive Order asks the National Institute of Standards and Technology to set “rigorous standards” to ensure that AI systems are “safe, secure and trustworthy”. It also says that the Department of Homeland Security will “apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board”. It says that the Department of Commerce “will develop guidance for content authentication and watermarking to clearly label AI-generated content.” And it directs the National Security Council and the White House Chief of Staff to develop “a National Security Memorandum that directs further actions on AI and security.”

In many respects, the document starts the wheels rolling on investigating how we can protect ourselves from AI while at the same time investigating how we can use it to make our lives better. And it wants to start those investigations of AI in the areas of national security, biological materials, fraud, cybersecurity, privacy, civil rights, consumer protection, patient rights, students, workers, government, and international leadership.

If that sounds like a lot to you, well that’s comforting because it sure sounded like a lot to me.

And how will it do these things? And to what end? Well it doesn’t say, which yeah, I get it – this is the beginning and if you could lay out every nut and bolt you probably wouldn’t need an Executive Order in the first place.

But because it doesn’t have any specifics and because it looks like it’s trying to do everything and the kitchen sink, it’s hard not to look at this as a very political document. Especially when you remember that the UK AI Safety Summit is taking place on November 1 and 2 at Bletchley Park (which you may remember as where Alan Turing spent much of World War II breaking codes and inventing computers).

So yeah, sure, how could even the most non-political document in the world not look like a political document in these times where this topic is concerned? Especially one that is trying to touch as many different parts of our society and culture as this one is because, well, that’s what AI will do too…

That said, one would like to think that the Executive Order’s diversity of focus and purpose is actually its strength; that if you gather people who specialize in lots of different areas all studying the impact of the same thing on their particular area of expertise, you’ll get a robust and innovative series of solutions when they all come back together again.

Does government work like that? Probably not, but hey, there’s always a first time.

Additional Reading: 

The Executive Order on Safe, Secure and Trustworthy Artificial Intelligence

Forbes: What Biden’s New Executive Order Could Mean For The Future Of AI

New Atlanticist: Experts react: What does Biden’s new executive order mean for the future of AI?

Center for Strategic and International Studies: The Biden Administration’s Executive Order on Artificial Intelligence

Is AI going to kill us or not?

Even as people tell us all of the amazing things AI is going to do for us, just as many people are telling us how AI is going to destroy us. Which is confusing enough (I mean, can’t something just be good for a change?) except that in many cases, the people leading the doomsayers are some of the very same people who made AI in the first place. Which is like Henry Ford saying “Sure cars are great, but they’re probably gonna kill over 42,000 people a year you know, so…”

To get some clarity, I dug into the open letter that the Future of Life Institute (FLI) published (which has over 30,000 signatories, including everyone from Steve Wozniak to Yuval Noah Harari to Elon Musk), along with a supporting document of “principles” the organization created (5,700 signatories), and then some FAQs that they provided, along with information from a few other places, in order to figure out what the hell was going on. As you might imagine, there was a lot to digest and a significant amount that went over my head (if I may mix those body metaphors). But there were a few things that leapt out at me.

And the short answer is yes. AI is going to kill us or not.

The longer answer is, um, longer. And it starts with slaughterbots.

Number 18 in the FLI’s principles is the following:

“An arms race in lethal autonomous weapons should be avoided.”

I don’t even know what that means and I’m already afraid.

“Autonomous Weapons Systems (AWS) are lethal devices that identify potential enemy targets and independently choose to attack those targets on the basis of algorithms and AI.

“The U.S. Department of Defense described an autonomous weapons system as a ‘weapons system that, once activated, can select and engage targets without further intervention by a human operator.’

“Lethal autonomous weapons and AWS currently exploiting AI, under development and/or already employed, include autonomous stationary sentry guns and remote weapon stations programmed to fire at humans and vehicles, killer robots (also called ‘slaughter bots’), and drones and drone swarms with autonomous targeting capabilities.”

(from “The weaponization of artificial intelligence; What the public needs to be aware of” by Birgitta Dresp-Langley, Director of Research at the Centre National de la Recherche Scientifique)

Okay, now I know and I don’t feel any better.

Obviously I concur with the FLI – an arms race in these things should definitely be avoided. Will it? Well if the history of the world is any indication, no. Look, Alfred Nobel created dynamite and he thought it would end war. Oppenheimer felt the same way about the bomb. You think this is gonna be any different? Me neither. Chalk one up for AI killing us – or at least, giving us humans one more new way to kill ourselves.

(Oh and did you notice how I blithely passed over the fact that these things already exist? Pretty slick, huh? And, you know, terrifying. So maybe chalk two up for AI killing us).

How about our jobs? Is AI going to take our jobs?

Well, maybe, kinda, sorta, no?

“Frey and Osborne (2013) estimate that 47% of total US employment is at risk of losing jobs to automation over the next decade.

“Bowles (2014) uses Frey and Osborne’s (2013) framework to estimate that 54% of EU jobs are at risk.”

(From “The impact of artificial intelligence on growth and employment” by Ethan Ilzetzki, Associate Professor, London School of Economics (with Suryaansh Jain))

“Artificial intelligence (AI) could replace the equivalent of 300 million full-time jobs, a report by investment bank Goldman Sachs says.”

(From “AI could replace equivalent of 300 million jobs – report” by Chris Vallance)

That all looks bad, but it’s actually a mixed bag. The general consensus – with a healthy dose of caveats – is that the net-net is to the good: there will be more employment, more jobs, and more revenue. But that’s overall.

“The World Economic Forum concluded in October 2020 that while AI would likely take away 85 million jobs globally by 2025, it would also generate 97 million new jobs in fields ranging from big data and machine learning to information security and digital marketing.” (https://cepr.org/voxeu/columns/impact-artificial-intelligence-growth-and-employment)

That’s obviously a net gain of 12 million jobs, but it is ridiculous to think that all of the 85 million job losers will get jobs in the new fields. Or even that all the new jobs will be spread evenly across the geography of job losses. AI is a disruptive technology. It will require people to retrain themselves for new jobs in a new economy. You know, like the internet did. (*cough *cough). And if you think that’s going to be a simple and obvious and automatic thing, I suggest you ask your local coal miner how his new career installing solar panels is going.

In short, yes, it will take the jobs of some of us. But it will also employ others of us. And, if any of the projections are accurate, it will employ more of us than are employed now. Which is a good thing, right? Unless you’re not able to retrain or get a new job. Which would make it a bad thing.

Oh and speaking of “bad things”, the FLI weighs in on an interesting aspect of “retraining” for us to be afraid of that I hadn’t thought about. Number 22 on their list of principles is:

“Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.”

This starts to get to the heart of what motivated the FLI and its signatories in the first place, and why they’ve asked for a six month “pause on development” from all of the labs. They are concerned about the ability of AI systems to self-improve in ways that are good for them (that is, for the AI), but that are harmful for humans. In other words, if humans are not there to actively monitor the “improvements”, the AI could “improve” itself right out of needing humans around at all. Think of this as a charming combination of “autonomous weapon systems” and “job insecurity” applied to every aspect of your life.

“Oh Martin, you’re exaggerating. You old copywriters are so dramatic, looking at every exciting technological advancement as a disaster. You probably would have complained about fire when the cavemen discovered it. You just need to calm down.”

Okay, but there’s this thing called AGI – Artificial General Intelligence:

“Current AI systems are becoming quite general and already human-competitive at a very wide variety of tasks. This is itself an extremely important development.

“Many of the world's leading AI scientists also think more powerful systems – often called ‘Artificial General Intelligence’ (AGI) – that are competitive with humanity's best at almost any task, are achievable, and this is the stated goal of many commercial AI labs.

“From our perspective, there is no natural law or barrier to technical progress that prohibits this. We therefore operate under the assumption that AGI is possible and sooner than many expected.”

(From “FAQs about FLI’s Open Letter Calling for a Pause on Giant AI Experiments”)

In other words, AI is just the tip of the iceberg, the cute funny thing we use to make goofy memes that make our friends laugh on social media, that’s opening the door for the 800 pound gorilla that will tear your head off.

And this literally from the guys who made the “cute funny thing” in the first place. Chalk another one for the doomsayers.

But here’s why you should not be a doomsayer. In fact, here’s why you should actually be optimistic about the future of AI (if you can believe it).

“Designers and builders of advanced AI systems are stakeholders in the moral implications of their use, misuse, and actions, with a responsibility and opportunity to shape those implications.”

This is buried deep in the FLI’s “principles” document, and it’s sort of stunning. First – it shows a level of self-awareness that is rare in creative people of any stripe. I mean, imagine spending a significant portion of your life thinking about theoretical stuff that most people didn’t understand, which ultimately leads to the creation of a technology that is incredibly innovative and breakthrough and world changing. And all of a sudden everything you’d been working on for years was in the news and people knew who you were – and then you suddenly were confronted with the fact that what you’d been doing as a lifelong intellectual exercise had resulted in something that could end life on earth as we know it. So you decide to try to make sure that never happens.

People just don’t do that. Zuckerberg didn’t say “yeah, it’s possible that this little thing I’m whipping up in my dorm room will be used by authoritarians and other power-crazed assholes to try to steal elections and spread disinformation and destroy lives, and so it’s on me to make sure that doesn’t happen”. And Tim Berners-Lee didn’t say at the dawn of the internet “Yeah, I think this could be really great, but I’m accepting the responsibility for making sure it doesn’t fuck everything up.”

But more importantly, it’s remarkable because it offers us a new path to success. One not born from the typical Manichean yes/no, black/white, AI is good/AI is bad world that 99% of what we do lives in.

A third way, as it were, that incorporates our humanity and our technology, working together, to meet these challenges.

Which frankly is the only way success ever really happens anyway. Just ask Oppenheimer.

It’s the End of the World as We Know it: Thoughts on the Demographic Cliff

Recently we were discussing with our client at Drexel University the phenomenon that higher ed organizations have been wrestling with for the past few years – and which has accelerated lately – called “The Demographic Cliff” (Drexel posted a really interesting piece on it here).

Basically, this is the observation that the number of traditional college-age students in the U.S. is expected to peak in 2025 or 2026 and then decline precipitously – experts are predicting a drop of around 15%. That’s because during the Great Recession of 2008, birthrates plummeted and now, 17 years later, there are fewer kids in the pool to go to college. And as if that weren’t enough, this smaller pool has been exacerbated further by a general trend since Covid of declining enrollments (down 7%) and by the fact that increasingly kids are finding that a college degree is less mandatory for a career. (There’s a terrific article on these additional causes here).

And as concerning as this is for us and our university client, it got us thinking about the broader implications – not just to education (although with a major university as an account, of course that matters to us) but to the wider economy. Because those fewer 17 and 18 year-olds aren’t just going to “not go to college”. They’re also “not going to go into the workforce” which means fewer of them to hire – to, at best, fill roles in an expanding economy, and at worst, replace the Baby Boomers who will – you should pardon the expression – begin dying off about then (the top end of the Baby Boomer generation are, at this writing in mid-2023, 77; median life expectancy for women is 79 and for men is, gulp, 73. You do the “2025-2026” math…)

But wait, there’s more good news.

Because these missing “not go to college” and “not go into the workforce” 17- and 18 year-olds will also “not become consumers”. Which means there will be fewer people to buy products. Your products. Which means the fight for customers is about to get more intense – just to stay at par, let alone meet those increased sales goals that your boss just dropped on your desk.

Just like it is right now for colleges and universities.

And if all of this sounds hard to believe, well, that’s understandable. Because America hasn’t faced a situation quite like this in decades (some research indicates in forever – like literally since the founding of the nation). During the post-World War II boom, each year brought more consumers, for a long time in the form of Baby Boomers. And even when population growth slackened a bit, as it did with Generation X, you still had all those Baby Boomers out there driving sales, and you had the Millennials right behind them – a generation even bigger than the Baby Boomers!

But the coming years forecast a significant slowing of population growth. The CBO predicts that over the next 30 years, the population will grow about a third as fast as it did from 1980 to 2021, with more than three-quarters of that growth coming from immigration (indeed, by 2043 all population growth will be driven by immigration – which raises a whole other bunch of questions, which we’ll have to address at another time).

So if all of this sounds like stuff you didn’t see in your brand-manager textbooks back at B-school, well, that’s because it wasn’t there, because as we said, it’s never been like this in the U.S. before, certainly not in the modern-era economy.

You’re welcome.

So what do you do? We see three options.

Option 1 is to give up. The problem is too hard, this wasn’t part of the deal when you signed up, see ya later. Not a great option, but hey, it’s an option.

Option 2 is to pretend all that data is wrong and to just carry on as you always have done. Which, you know, also not great, though it may work for a while. It’s kind of like the story Steve McQueen tells in “The Magnificent Seven” about the man who climbed to the top of the tallest building in his town and jumped off, and as he passed each floor on the way down, people heard him say “so far, so good…”.

Option 3 is to double down on the things that have made organizations successful when faced with increased competition for any number of reasons. Because that’s what this really is, right?

Of course, usually increased competition is driven by more actors fighting over a finite number of customers. But a finite number of actors fighting over a smaller pool of customers is fundamentally the same thing. So it makes sense that it might require a fundamentally similar approach.

Namely, to be clear and compelling about what makes you different and therefore better, and derive that difference from a clear understanding of what your customer needs, not what you want to sell.

“Better” of course, as defined by the people you’re trying to attract. It’s not what’s better to you, it’s what’s better for them. And really better, specifically better, not broadly better as just, you know, human beings.  

And this “better” will invariably come out of an understanding of what their needs are – not the needs you’ve foisted upon them or even invented out of a wholly inaccurate understanding of them. Needs that will be as difficult for them to articulate as they always have been, made more complicated by the fact that there are fewer respondents available to articulate them. Needs which will likely be about as severely impacted and altered by the demographic changes as yours are.

Most organizations, of course, won’t spend the time, effort and money figuring any of this out, in any meaningful way. As a result, their communications will be what they always are when you don’t do the work - fuzzy, ill-defined, and often, an attempt to appeal to as many people as possible. Which, you know, ends up appealing to no one.

Those that do will have a double advantage – the advantage of doing something that could actually work during the end of the world as we know it, and the advantage of doing it amidst competitors who are doing something that will not.

And that, hopefully, should make you feel fine.

I Know This Much is True: Thoughts on AI Hallucinations

AI is amazing. For example, it’s revolutionizing search so you can find stuff faster and more efficiently than ever before. Like in 2023, when someone asked Google’s Bard for some cool things about the James Webb telescope he could tell his 9-year-old, and right away it reported that the telescope took the very first picture of a planet outside of our solar system. Cool, right? And at the other end of the spectrum, in 2022, when a researcher was digging into papers on Meta’s science-focused AI platform Galactica, he was able to find a citation for a paper on 3D human avatars by Albert Pumarola.

Unfortunately, both of these results were bullshit.

The first picture of a planet outside our solar system happened 17 years before the James Webb telescope was launched, and while Albert Pumarola is a real research scientist, he never wrote the paper Galactica said he did.

So what the hell is going on?

Both of these are cases of “hallucinations” – stuff that AI just gets wrong. And while those two examples come from LLMs (“Large Language Models” – “text based platforms” to the rest of us), they also happen – with spectacular results – in image-based generators like Midjourney (check this horror show out). But right now, let’s stay focused on the LLMs, just to keep us from losing our minds a little.

And let’s start by reminding ourselves what AI really is: a feedback loop that generates the “next most likely answer” based on the patterns it sees in the data you’re exposing it to. So, hallucinations (also called “confabulations” by the way) occur because, as Ben Lutkevich at TechTarget.com writes, “LLMs have no understanding of the underlying reality that language describes.” Which, interestingly enough, is fundamentally not how humans understand language. As Khari Johnson writes in Wired.com (as reprinted in Arstechnica.com):

UC Berkeley psychology professor Alison Gopnik studies how toddlers and young people learn, to apply that understanding to computing. Children, she said, are the best learners, and the way kids learn language stems largely from their knowledge of and interaction with the world around them. Conversely, large language models have no connection to the world, making their output less grounded in reality.

In other words, for humans, language – words, etc. – represent things in the real world. But for LLMs, words are just the elements in the patterns that they see in the data. Which, yeah, they are, but for humans, those “patterns” are in service of something called “meaning”, which for LLMs they’re not. They’re just patterns. And because patterns are a significant part of language, when AI platforms replicate them in answers back to us, their results sound believable. Like the telescope thing. Or the scientific citation.

But I also think there’s another reason why they work on us. We’re sort of pre-programmed to believe them just because we asked the question.

Think of it this way. If you’re looking for information about something, in a sense, you’ve created a pattern in your head for which you are seeking some sort of reinforcement – that is, an answer that fits into the pattern of your thinking that generated the question. Like the telescope example above – one could assume from the question that the person already has some awareness of the telescope and its abilities. Perhaps they’d read this article in Smithsonian magazine about seven amazing discoveries it had already made – but felt that the article was too esoteric for a nine-year-old. The point is, they had an expectation, which is, I think, a sort of pattern. So when the LLM provided an answer, it plugged very neatly into that pattern, creating an aura of truth around something that was fundamentally false.

And in a sense, this is not new news. Because as every grifter will tell you, for a con to succeed, you gotta get the mark to do at least half the work. And where AI hallucinations are concerned, we sort of are.

So, hallucinations are bad and we have to be on our guard against them because they will destroy AI and us, right?

Well, no, not exactly. In fact, they may actually be a good thing.

“Hallucinations” says Tim Hwang, who used to be the global public policy lead for AI at Google, “are a feature, not a bug.”

Wait, what?

At the BRXND conference this past May, Tim used the metaphor of smartphones to explain what he meant. First, he reminded us, smartphones existed. Then, he explained, a proper UX was developed to not only use them effectively, but to take advantage of their unique capabilities, capabilities that revolutionized the way we think about phones, communicating, everything. Tim believes we’re in a similar, sort of “pre-smartphone-UX” stage with AI, and that because our interfaces for it are extremely crude, we’re getting hallucinations. Or perhaps said another way, the hallucinations are telling us that we’re using AI wrong, they’re just not telling us how to use it right yet.

This “using it wrong/using it right” idea got me thinking as I plowed through some of the literature around hallucinations and came across this from Shane Orlick, president of writing tool Jasper.AI (formerly “Jarvis”) in a piece by Matt O’Brien in APNews:

“Hallucinations are actually an added bonus,” Orlick said. “We have customers all the time that tell us how it came up with ideas — how Jasper created takes on stories or angles that they would have never thought of themselves.”

Now sure, this could just be a company president looking at the hallucinations his AI is generating as a glass half full, as it were. But it got me thinking about the idea of creativity – that in a sense, hallucinations are creativity. They may not be the factual answers you were looking for, but they’ve used those factual answers to springboard to something new. You know, like creativity does.

I mean, who among us has not sat in a brainstorm and come up with some wild idea and had someone else in the room say “well, yeah, that makes sense, except for this and this and this” (just me? Oh…). How is that different from the hallucinations we started this essay with? “Yeah, that James Webb Telescope fact makes sense because an exoplanet is the kind of thing it would see, but it didn’t take the first picture of one because of this and this and this.”

And better yet, how many times have you sat in brainstorms and someone came up with an idea that wasn’t perfect, but that was great nonetheless, and that the team was able to massage and adjust to make it perfect? Why couldn’t you do that with AI hallucinations?

Could the path forward be, not the elimination of hallucinations, but the ability to choose between outputs that were proven, documented facts and outputs that were creativity based on proven, documented facts? Two functions serving two needs, but resident in one place. In much the same way that in the early days of the internet, we had to wrap our heads around the idea that sometimes we went to a website for facts and information, and sometimes we went to play (and sometimes we went for both. Okay, forget that last example).

Now look, I could be completely wrong about all of this. About hallucinations, about telescopes, about what Tim Hwang meant, about the nature of creativity, about the early days of the internet, about all of it. But it would seem to me that inquiry, even one as faulty as mine, is likely the best path to untangling AI, especially in early days like this and especially as we encounter challenges like these. Or said another way:

“The phenomenon of AI hallucinations offers a fascinating glimpse into the complexities of both artificial and human intelligence. Such occurrences challenge our understanding of creativity and logic, encouraging us to probe deeper into the mechanics of thought. However, we must approach this new frontier with a critical and ethical perspective, ensuring that it serves to enhance human understanding rather than obscure or diminish it.”

You know who said that? Albert Einstein. At least according to the internet. And he was pretty smart so that made me feel much better about hallucinations. And you should too. I think.

Follow the Money: A Different Way of Thinking about "Class"

People don’t read, or so I am told. Business people doubly so. Those who inexplicably do, do not read fiction. And those rare few who do read fiction, do not read 19th century fiction.

And yet it occurs to me that buried in a 19th century novel was an insight into a better way to think about class – and therefore, how to market to people – than anything I’d read in more recent, or business-related, books. And it was this:

Annual income twenty pounds, annual expenditure nineteen nineteen and six, result happiness.

Annual income twenty pounds, annual expenditure twenty pounds ought and six, result misery.

I know what you’re thinking. You’re thinking “Duh.” You’re thinking “Spend more than you have and you’re unhappy. Spend less and you aren’t. What could be more obvious?”

And I agree with you.

So why aren’t we applying that to how we define economic classes?

Because our current terminology is frankly meaningless. Virtually every person I have ever met has told me that they had a “middle class” upbringing. And I can tell you categorically that what each of them meant by “middle class” was wildly different from a financial perspective.

That doesn’t mean they were lying. Instead, I think they were measuring based on the simple fact that they had friends or acquaintances or neighbors who had more money than they had, and they had friends and acquaintances and neighbors who had less money than they had. Which meant that from where they stood, they were in the middle. Thus, middle class.

And of course in America, where there has always been a vast mixing of classes (significantly less so now than in the past, to be sure), one could almost always see oneself “in the middle”. A situation only exacerbated by popular culture and – wait for it – advertising, which shows everyone every great thing they can, and cannot, afford, cheek-by-jowl with a level of soul crushing poverty in parts of the nation one would never otherwise encounter. Are you “Real Housewives of New Jersey”? No? Are you “The Wire”? No? Then you must be middle class. 

But Micawber’s observation in Dickens’ 1850 novel may provide us with a better path. What if we measured less by context, and less even by raw dollars, and more by receivables and expenditures?

What if we said – broadly speaking – that anyone making at least one dollar more than they needed to meet their expenses was rich? And that anyone making at least one dollar less than what they needed to pay their bills was poor? And that anyone making exactly the same as their expenses was middle class?

This would do two things right off the bat (three if you count annoying every economics teacher I ever had). First, it would speak better to psychological drivers than the old model by disconnecting the definitions from dollar amounts. The lawyer pulling down a cool million a year – but who has annual expenses of $1.5 million – is living paycheck to paycheck as much as any “poor” person you would normally think of. And the pizza delivery guy who paid off his house and his car, and who has no credit card debt and no loans, and whose favorite vacation is camping down the road and fishing – that guy’s got more income than he can spend. And how else would you describe a “rich” person?

And sure, you can quibble with their decisions – look at all that alimony the lawyer is paying! Three ex-wives? Wow! – or – Why doesn’t the pizza guy take a trip to Italy or buy a newer car? But practically speaking, their decisions are only relevant if we measure class purely by acquisitions and attainments. Of course, it may sound odd for a guy in advertising – the front line of consumerism – to advocate such an approach. But people in advertising know probably better than anyone how fleeting and ephemeral are the purchases people use to identify their success – so measuring anything meaningful by them is, um, meaningless. Which is probably why we push them so hard, I suppose.

And second, a taxonomy like this disconnects the groupings from geography. Every day, people are managing their personal finances in terms of national if not global economic trends. Your class isn’t really based on where you live any more than your entertainment choices are. Sure, sure, it’s more expensive to live in Manhattan, New York, than it is to live in Manhattan, Kansas, but inflation, medical costs, grocery, gas and housing prices tend to trend similarly everywhere, even if they differ in degree. A measurement that thinks of those broad trends in the context of personal economics is necessarily more useful to understanding why the people in those groups do what they do than one that says “you make more than X? Congratulations, you are no longer poor.” With ramifications professional, political and personal.

Look, I’m no economist. I’m just a simple copywriter who’s trying to understand why people do what they do. Perhaps I’m wrong. Perhaps I am not. Or perhaps we should just use this new model until, as Micawber himself would advise, something better turns up.

Programmed By Fellows with Compassionate Visions: Some Thoughts on Constitutional AI 

Stop me if this has happened to you. You type a simple prompt into some handy AI generator and what comes out is more toxic than a landfill at Chernobyl. I mean, not just a little “off” but like wildly, deeply, disturbingly off.  

And then you remind yourself, oh, yeah, AI is just sophisticated math that looks for patterns in the data it is exposed to, and if the “data it is exposed to” is, you know, “the internet”, then it’s not that surprising that sometimes it produces content that is toxic, harmful, biased, sexist, racist, homophobic etc., since that stuff exists on the internet.  

Which makes sense, even if it doesn’t make it okay, right? 

So how do you make that not happen? Well, currently the strategy is mostly, “have humans look at the outputs and freak out if something horrific is being delivered and then fix it”. Which is fine, except for two things.

First, the point of AI, or at least one of the points of AI, was efficiency, how it freed humans up to do other things with their time. And if you have to go back and look through everything it’s doing and check to make sure it’s not horrifying, then it's less efficient. I mean, you might as well just write the stuff yourself. 

And second, you can’t scale humans. Again, one of the values of AI is the sheer quantity of content it can output insanely quickly – a quantity that it’s not realistic to have humans check over with the digital equivalent of a fine-toothed comb. And, it should be noted, a quantity that is only going to get larger as AI evolves.

So what do you do? 

Many have been exploring something called constitutional, or principles-based, AI. And a company called Anthropic (founded by former OpenAI rocket scientists and funded with some serious Google VC money) has been getting attention for recent advancements it has made in this area with its own generative AI platform, “Claude”.

So what’s constitutional AI? 

In much the same way that a government has a series of rules and laws that reflect what it believes and what it feels is proper – and codifies those rules and laws in a constitution – constitutional AI does the same thing for AI. A human creates a set of “rules” and “laws” that sort of sit on top of what the AI is doing, to act as a check on the content.  

Sort of like, you ask the AI a question, it generates an answer based on the patterns it finds in the data you’re exposing it to, and then constitutional AI checks it to make sure it’s not generating an answer that’s horrifying.  

Or said another way, that it is generating an answer that is aligned with the beliefs and principles you’ve established in the constitutional AI. 

And it does it crazy fast, and it does it crazy voluminously because, you know, it’s AI and that’s how AI rolls. 

Which is great, right? Right. Hooray for progress. 

Now, what’s interesting about all this – or among the things that are interesting – is how in a sense, constitutional AI is sort of a very AI way of solving this problem. AI basically says “these are the patterns I’m seeing in the data”, right? So if you feed it data that says that the earth is flat, it’s gonna tell you the earth is flat, right? Because that’s the pattern.

And if the constitutional AI you have sitting on top of it is filled with criteria like “discard any responses that endorse a non-flat earth viewpoint”, well, you’re still gonna wind up with flat earth answers. A feedback loop on top of a feedback loop, as it were. And that feels dangerous because on the one hand, it’s reinforcing the biases, on the other hand, I don’t know it’s reinforcing the biases unless I dig into what the “criteria” are, and on the other other hand, how the hell is all of this making things faster and more efficient for me?

Now you may say that I’m being absurd. And yeah, I get that a lot. And it’s entirely possible in this case since I’m still learning about AI. But here’s why I’m being absurd.  

Because a lot of the language in the literature I’ve been reading in this area keeps referring to “common sense”. That when they’re creating these constitutional AIs, humans will be providing “common sense” criteria “because AI doesn’t evaluate, it just looks for patterns.”  

Which, right, I get that. Except in my experience, common sense is usually not that common.

Look at the “common” things Americans can’t come to a “common” agreement on right now – about race, sex, gender, history. So what is this “common sense” that the literature acts as if it’s so obvious to all of us that it will obviously be inserted into AI as some sort of obvious criteria?  

And you know what else common sense isn’t? It isn’t static. Read what was “common sense” about race, sex, gender, history - 50, 75, a hundred years ago. About intellectual capacity. About morality. Things that would be horrifying today. Well, to some of us.  

Which means that periodically humans will have to update the “common sense” of the constitutional AI. Who? When? How? Because we’re not just talking about software upgrades due to advances in technology. We’re talking criteria around real cultural issues that will affect – often invisibly – the content that we will increasingly be relying on to provide us information.  

Now to be clear, I am in no way saying that constitutional AI is a bad thing. It’s a very valid attempt to solve a very real problem that will only get very much worse the longer we ignore it. And I applaud everyone who’s working on it. 

I just want to make sure we’re actually solving it, not just turning it into another problem instead. 

For Better or Worse: Thinking Differently About Problem-Solving

Some time ago I was cutting my grass, leaves and branches and all, when my lawnmower suddenly made a horrible grinding sound, so I stopped. It turns out that what I thought was a pile of leaves was in fact a pile of leaves and a wooden rake, and my lawnmower blade was hopelessly entangled with the tines. Well, not “hopelessly”, actually. I knew that it would just take a few minutes of wrangling to separate it from the blade.

However, because of the way the rake was stuck into the blade, I was going to have to tip the lawnmower over and temporarily flood the engine. Which meant once I got the rake dislodged, I was going to have to wait about 20 minutes for the lawnmower to start. But if I didn’t tip over the lawnmower, well, I wasn’t going to be able to get the rake out, which meant I wasn’t going to be able to use the lawnmower to cut the rest of my grass.

In other words, it occurred to me that I had to actually make the situation worse before I could make it better. And this was kind of a revelation.

Because we usually don’t think that way. We usually think linearly. Here is a problem. I will solve it by doing x. Now things are better.

But this was sort of the opposite of that. And as ridiculous as it sounded, I was actually living it. I turned over the lawnmower, I flooded the engine. I extricated the rake. I turned the lawnmower back over. I tried to start it. It would not start. I waited twenty minutes. I started the lawnmower and went back to cutting the grass.

Worse, to make it better. My mind boggled. So I took this observation to friends of mine, friends who are smarter than me of course, to show them my discovery.

“Look at this,” I said to a surgeon friend. “Sometimes solutions aren’t linear,” I said. “I had to make a thing worse to make it better! What do you think of that!”

“I think you’ve just described surgery,” she said.

“I’m sorry, what?”

“Well, do you really think that opening someone up, exposing their inner organs to the outside world, rooting around in their sinews and blood and muck is actually making them instantly better? Of course you don’t. If you did, you would expect people to hop off the operating table ready to run a marathon. But nobody expects that, do they?” “well, um…” “There’s a ‘recovery time’, right? Recovery from what? Recovery from the surgery, from what we did to you. The very existence of ‘recovery time’, the very fact that everyone is so used to that idea, is proof that the idea of making things worse to make them better is, well, obvious.”

Disappointed but undaunted, I went to a friend of mine who teaches mathematics and said the same thing. That I had this thing that was a problem, and I sort of made it worse, in order to make it better. That this idea seemed to run counter to the way I thought things worked, you know A plus B equals C.

“Because it’s not really about addition, is it. It’s more like multiplication.” “I’m sorry?” “Where a negative times a negative equals a positive. Surely you remember that, right?” “Um, well…” “Did you not take mathematics in middle school?”

Now, setting aside my disappointment that I had not discovered some fascinating new … something … AND the fact that I seem to have fairly snotty friends, it occurred to me that if this is true in mathematics and medicine and, well, gardening, then perhaps it could be true in advertising.

That is, clients come to us with a problem: “Sales are down” or “Awareness is bad” or whatever. And they expect us to come up with a solution that will make things better. You know, A + B = my boss is happy now.

But what if some problems in advertising are like the rake and the lawnmower? What if some problems need to be made worse before they can be made better? What are those problems? I don’t know – and it’s entirely possible that they’re very specific to each situation. But now that I’m aware of this idea, I wonder how many problems I’ve mis-diagnosed and provided less than adequate solutions for.

Which is not to say that if I had told the client “we have to make this worse first” that they would have reacted positively. Clients don’t want to hear “worse”. Clients’ bosses don’t want to hear “worse”. By and large, businesses are not built for it. Certainly stockholders are not.

And just to be clear, I’m not talking about some kind of William Westmoreland “we had to destroy the village in order to save it” thinking. That’s about obliteration; “making something worse” implies a deterioration within the context of the thing, not a total restart. I didn’t, for example, throw out the lawnmower. I just made it inoperable. Worse. That’s different from a sort of “blank slate, let’s start over” thinking - which is also a legitimate tool for problem solving, but which works by throwing the baby out with the bathwater. Like I said, I didn’t throw out the lawnmower and the rake and hire sheep to deal with my grass. I just made a bad situation worse, in order to make it better.

When I returned home, I found my son finally cleaning his room, as his mother had asked him to. Earlier it had been a mess. Now, it was a disaster. There wasn’t even a path from the door to the bed. Stuff was everywhere. I asked him what the hell he was doing. I told him this looked like a bomb had gone off. “Yes” he said, “I have to make it worse before I can make it better.”

I went back out to cut the grass…

That Thing You Do: Early Thoughts On AI

There’s a scene in Apollo 13 where Kevin Bacon needs to do some calculations. And he’s exhausted and under a lot of stress because, you know, he’s in a broken tin can a zillion miles from earth floating around with Tom Hanks and Bill Paxton. So he asks Mission Control in Houston if they can verify his math. And Mission Control says “sure, not a problem”, and then the camera turns to a row of math nerds with pencils who are going to run the numbers by hand, and then compare notes. That was how they did it in the days before calculators, in the days when computers filled a whole room and couldn’t be bothered to work on, you know, making sure Kevin Bacon was toting up his figures properly. Four crew-cutted nerds with #2’s.

And every time I see that scene, once I get past the sort of archaic lunacy of it, I think “Really? That’s what those guys were there for? To do math? Couldn’t they have been doing something more, I dunno, important? Like maybe figuring out why the CSM LiOH canister and the LEM canister weren’t the same shape or something?”

I’ve been thinking about all that a lot as I listen to everyone talk about AI and ChatGPT.

For most of our time on this planet, the only machines humans had were, well, humans. And yeah, the human body is great – it can do a lot of things. So if you don’t have a truck, well, you’re the machine that’s gonna get the load to market. If you don’t have a backhoe, you’re the machine that’s gonna dig the grave. And if you don’t have a calculator, you’re the machine that’s gonna run the numbers (crewcuts optional).

But like most machines that can do a lot of things, the trade-off is, it can’t do any one of those things exceptionally. Because it’s built for diversity, not specialization. A truck can make fewer trips than a human can, a backhoe can dig the hole faster, a calculator can run the numbers with fewer mistakes. But a calculator is less capable of dragging a load to market than you are and a truck is fairly useless where running the numbers is concerned. The human body can do all those things – not A+ perfect, but better than, you know, not at all. Which is the alternative.

When we look at AI and ChatGPT and all the others that have come out since I started writing this essay, it should be in that context: what have we been mediocre at that this new technology can free us from doing a mediocre job of, so we can focus on something we’re actually good at, indeed, better at than machines? As my buddy Howard McCabe asked, can it scan reams and reams of code for bugs faster and more thoroughly than a human can? Yep. And if it does, does that free up a human to think more deeply about what humans would really want that code to do and how they might use it? Yes it can. Because it can help us by doing better than us the things we are not built to do well. So why wouldn’t we want that?

But here’s what it can’t do. It can’t make quantity equal quality.

For while I think there are opportunities for it to free us up to do better work, I am concerned that we are falling into a trap that is rampant in advertising generally. Namely that more = more effective. Which, you know, no.

The fact is that more of what I don’t care about doesn’t make me care about it. More of what I don’t want, doesn’t make me want it. More is just noise, static, interference. More is just the stuff that actually gets in the way of the stuff that I do want cutting through. More is why people hate advertising (well, one of the reasons).

But “more” is the last refuge – well, the first refuge – of advertisers who are either too lazy or too stupid to really think about their customers. “More” is the strategy of marketers who don’t think their customers matter, or more dangerously, don’t think their own products matter, and so haven’t taken the time to find that unique quality, that unique difference, that unique thing that customers are missing and desiring that their product can provide, in order to really make a connection. They just say “What I say isn’t important - if that’s where my people are, that’s where I’m going to be too”. Well, yeah, pal, but there were a lot of people at the Lizzo concert too, and 99.99% of them were only paying attention to one person.

“Just showing up” (as I have written elsewhere) is not a brand strategy, but a lot of what we are hearing right now is that AI and ChatGPT are the future of advertising because they will generate exponentially more content, which will let brands “just show up” an order of magnitude more than they do now. And agencies will likely fall for this because, well, there are a lot more bad, lazy and stupid ones than there are good ones. And this will undoubtedly elevate the public’s already keen ability to ignore the ads they see, and accelerate the development, use, and effectiveness of ad blockers and other devices that basically say, “oh no you don’t”. All of which will make what we do less effective.

So what do we do? Because if we’ve learned anything in advertising over the past hundred years it’s that anyone who bets against the technology will lose.

What we do is what smart agencies and smart clients have always done when faced with a cosmic leap in technology: use it with insight and imagination (often another way of saying “creativity”) to make work that people actually care about. That they think about when all those other things they don’t care about are avalanching them. It’s as simple – and as difficult – as that.

Who said advertising wasn’t rocket science?

I Wish That I Knew What I Know Now: Career advice from people more successful than me

We like to think we have a master plan. We like to think life is linear. We like to think we know what we’re doing while we’re doing it. But we also know that pretty much none of this is true. The number of times we look back on our lives and think “If I’d only…” or “I should have…” or even “What in the name of God was I thinking…?” are, unfortunately, more than we would like to admit to.

So when a journalist asked me “What’s the one piece of career advice you wish you’d gotten when you were first starting out?”, I was certain I would be able to regale her with memories, aphorisms, witticisms and other bon mots that would make me the Oscar Wilde of our age.

I was wrong. I had nothing.

Oh sure, there were things like “Buy Google when it IPOs at $85 in 2004.” Or “Your good relationship with the client does not extend to telling him what you think of his karaoke.” Or even “The flight for the big presentation is at 4, not 4:30.” But nothing I could really use, nothing I wanted to affix my name to in public (like I’ve just done here. Ahem. Oh well…).

So I passed the buck. I reached out to some of my closest friends — and to some folks I wished were my closest friends — for their two cents. What career advice did they wish they’d had way back when we were all young and firm and comparatively debt-free and able to bounce back from all-nighters with a staggering effortlessness?

What I got was a lot more than I bargained for. Apparently my friends have lots of opinions. And they’re not shy about sharing them. And while the journalist seems to have disappeared as effectively as a late inning lead by my beloved White Sox, the advice I ended up with still remains. And it’s still valuable. And a lot of it had to do with warning their young selves about the future.

“Plan on the inevitability of middle age and age-related obsolescence,” said my buddy the designer Gary Hudson, who was not alone in this admonition. And while few were complaining (okay, some were complaining — this is advertising, after all), they were still making it clear that they would have liked to have been made aware of what the future looked like so they could have planned for it. Because you know how good people in advertising are at planning.

And speaking of planning, it was also interesting how many talked about relationships, about how they wished they had made more of an effort to stay connected to people. Not purely from a business networking standpoint (although to be sure there was a lot of that. Like Contagious’s Paul Kemp-Robertson, who explained “I must have applied for 500 jobs via the usual listings and recruiters, but I got my first break because I freelanced with someone who just happened to know someone who was setting up a new venture and needed eager young fools to work for free.”) but from a quality-of-life standpoint. MUH-TAY-ZIK | HOF-FER’s John Matejczyk said “I’ve met so many great people along the way who I’m no longer in touch with. Kinda sad.” And Co:Collective’s Tiffany Rolfe echoed that sentiment, saying “I wished I had done even more of this rather than only focusing on my work and being too busy.”

Of course, “focusing on the work” came in for a large dose of career advice, to be sure. The idea that there’s a lot to do, a lot of competition to do it, and a lot of opportunity to piss it all away. “Persistence creates luck and put the fucking time in” was what illustrator Hal Mayforth advised. Leo Burnett’s Director of Talent Acquisition Debbie Bougdanous expressed a similar sentiment, but put it in a way that perhaps is more befitting her position: “Always be the last person to leave. Ask anyone if they need help before you leave at night. Those people always seem to do well.”

Where exactly you put in that effort, however, was also extremely important, and there were a number of people who echoed McCann’s Rob Reilly’s career advice (“Don’t chase the titles or money. Chase the work. The title and money follow.”). And while I completely understood the sentiment — cash is fleeting, but the Alex Bogusky-Rob Reilly-Dan Wieden seal of approval on your resume lasts a lifetime — as someone who has taught literally hundreds of kids who are emerging from universities under mountains of debt, I wondered how realistic it was for anyone starting out today. Because it’s not about telling these kids to suck it up and eat ramen noodles for a couple of years while forgoing the flat for their parents’ basement. It’s about them literally not being able to afford to take the job at the better shop, unless someone is subsidizing them.

And maybe that sounds a little harsh, but honestly, the career advice itself was full of hard — and valuable — truths like that. Like Miami Ad School’s Hillary Lannan, who reminded me that we’re not as precious as we think we are and that the sooner we understand it, the better our careers will be. “We’re all replaceable,” she said. “No one cares about you having your job as much as you do.” Oh, if I’d only known that when I was in my twenties…

And still the advice pours in. From people I emailed months ago. From people who already gave me advice and are giving me more. From friends of people who heard about my question and want to weigh in. Good advice. Great advice. Weird advice. Terrible advice.

And, perhaps the best career advice of all, which came from Ogilvy’s George Tannenbaum — “The advice should be, don’t listen to advice.”

Thanks to everyone who took time out of their busy days to provide me with valuable input and insight. And stay tuned, as invariably more career advice is on the way.