Every once in a while, someone learns that one of the hats I wear is that of a freelance corporate writer, becomes intrigued, and asks me to help write a book about a fantastic true story. This doesn’t mean that the person’s story is boring, uninspiring, or unhelpful. It might be a great story. The request simply reflects a common misunderstanding about the publishing world: that publishing a book is an easy, fun, inexpensive process guaranteed to succeed if you simply have a good writer.

The reality is that the world of book publishing is much larger and more complex than most folks can imagine. Book publishing is many things, but one thing it is not is a quick, easy path to riches and success. If indeed you have a book in you, then write it! Just don’t expect that book to be your meal ticket.

First and foremost, you need to determine whether a book is an effective tactic in a much larger strategy, and if so, then you need to coordinate many additional tactics throughout that strategy to make it all work. Some folks end up doing this very successfully and others fail miserably. It is best to know which crowd you are in before you begin.

Anna Sproul-Latimer is a literary agent with the Ross Yoon Agency. I like the way she summarizes these realities (Kara Gebhart Uhl, “Meet the Agent,” Writer’s Digest, March/April 2017, p. 17):

Don’t think a book is going to give you a platform. You’re going to have to bring your platform to a book.

Now that’s good advice for any aspiring writer!


A regular pastime of mine is making fun of artificial intelligence. AI can be a wonderful thing as far as it goes, as much as it can do, and as much as we allow it to do. AI is powerful because, under the right circumstances, it can (apparently) replicate human thought quickly and easily. At the same time, AI has intrinsic weaknesses due to its obvious lack of human qualities such as empathy, consciousness, judgment, free will, and holistic thinking.

Alan Turing is considered the father of modern computer science. Although he died in 1954, his “Turing test” continues to fuel computer scientists’ quest to create a computer so powerful that you or I would find it indistinguishable from a human being. The test is said to be passed if, while texting back and forth with that computer, we become convinced that we are communicating with another person instead of just a machine. Hence the ongoing effort to write code that will pass the Turing test.

Lately, some within the computing community have proposed that the Turing test has outlived its usefulness. This position derives from the growing idea that the Turing test is based more on deception than on true intelligence or thought. Gary Marcus is the director of Uber AI Labs and a professor of psychology and neural science at New York University. He recently wrote about his team’s work on this fascinating argument.

Two points from Marcus’ team especially capture my attention. One involves debunking the very idea that a single test (the Turing test) is genuinely capable of assessing AI (Gary Marcus, “Am I Human?” Scientific American, March 2017, pp. 58–63):

Initially we focused on finding a single test that could replace Turing’s. But we quickly turned to the idea of multiple tests because just as there is no single test of athletic prowess, there cannot be one ultimate test of intelligence. (p. 63)

The other point is the intrinsic and pervasive superiority of your brain or mine over any AI system. This, of course, again underscores the idea that truly human-like AI may be an impossible goal:

Anyone who has ever tried to program a machine to understand language has quickly realized that virtually every sentence is ambiguous, often in multiple ways. Our brain is so good at comprehending language that we do not usually notice.

Well, back to one of my favorite pastimes. The next time you find yourself wishing that you were smarter than a computer, please stop. The truer “wish” would be that a computer were smarter than you. And as we all know, not all wishes come true.


David Sax wrote a book called The Revenge of Analog: Real Things and Why They Matter. He addresses the occasional human tendency to become fed up with the inherent challenges of living in a digital world that prods us away from analog. It is a subject I think we must keep assessing because it constantly affects us all.

Steve Wieberg, in his review of the book, does an excellent job summarizing Sax’s fundamental concern (“Analog Strikes Back: In a Digital World, We Cling to Vinyl and Paper, Author Says,” The Kansas City Star, 12/18/16, pp. 1D, 8D):

People, he says, are craving real, tangible things and experiences and not always something stored in a cloud. Many prefer turning the pages of a book to reading on a backlit screen, or shopping in stores over purchases with a click. They want to hold objects in their hands. They want human interaction. Sometimes, they just need an escape from screens and keyboards. (p. 8D)

I agree with Sax’s fundamental concern. At the same time, I love what I can do with technology, and I love what technology can do for me. I would not want to be without it. The key to this dichotomy is balance.

It is only when we simultaneously maintain our appreciation of analog and our appreciation of technology that we are then prepared to filter selectively in the moment. How that works for you might be very different from how that works for me.

In a world that is increasingly technological, we need to keep surfing the wave, but we also need to remember how to get back to the beach.


If we reflect over decades, we can always identify certain brand names that are indelibly imprinted into our memory. Even as a child, I can remember certain brands that simply captured my imagination and admiration. Many of those brands hold that same position on my metaphorical mantle today. Why does this happen? It boils down to positive experiences, ideas, images, and associations with that brand.

Tim Ferriss is the author of Tools of Titans: The Tactics, Routines, and Habits of Billionaires, Icons, and World-Class Performers. Ferriss has come up with certain sacred rules of branding, all of which are worth reading. Below are a couple of snippets that struck me as true to my experiences and very likely to yours as well:

  • If everyone is your market, no one is your market. . . . In a social-sharing-driven world, cultivate the intense few instead of the lukewarm many.
  • Branding is a side effect of consistent association. . . . Put good business first, and good “brand” will follow.

As I reflect upon the best brands that have stuck with me throughout my life, I do believe that it has been that “consistent association” that did the trick. Consistency implicitly sticks.


We use predictive mathematical models constantly for all kinds of systems, behaviors, processes, and devices. We use them to try to predict future events. In some situations the models work well. In others, they fall short. And for some models, we simply haven’t seen enough of the future yet to validate them.

That might be the case with something called the Social Cost of Carbon (SCC). From the Environmental Protection Agency’s website:

[the social cost of carbon] is meant to be a comprehensive estimate of climate change damages and includes changes in net agricultural productivity, human health, property damages from increased flood risk, and changes in energy system costs, such as reduced costs for heating and increased costs for air conditioning. However, given current modeling and data limitations, it does not include all important damages.

Although many believe the SCC arises from noble concerns for our future on the planet, perhaps the model behind it is a bit of a stretch. David Kreutzer, a senior research fellow in energy and climate change, expressed some skepticism at a recent energy summit (Matthew Phillips, Mark Drajem, and Jennifer A. Dlouhy, “How Climate Rules Might Fade Away,” Bloomberg Businessweek, 12/19/16–12/25/16, pp. 6–7):

Believe it or not, these models look out to the year 2300. That’s like effectively asking, ‘If you turn your light switch on today, how much damage will that do in 2300?’ That’s way beyond when any macroeconomic model can be trusted. (p. 7)

Predictive models can be great tools, but every tool is useless beyond its limits.