Moving to Medium

I’m going to try posting on Medium instead of WordPress.  Here’s the latest:



Should I accept that review request?

August 29, 2019

A constant question that comes up for academics is: Should I accept that review request?  When I look back on how I managed my time as a junior faculty member, this was one area where I wasted time by saying yes way too often.  Pre-tenure, I nervously said yes to all requests.  I was afraid to say no and didn’t understand when it was prudent to say no. Over time, I’ve developed some rules of thumb for when to say yes that I’d like to share.

First (and this is obvious but it needs to be said), I only accept review requests where the content is in my area of expertise. Sometimes part of the content is in my area and part is not, and I will note that in my review.  (“I’m not an expert on machine learning, and will comment mainly on the HCI parts of this paper.”)

Second, if I have submitted to a venue (conference, journal), then I owe them reviews back. I have generated a need for reviews, and I need to give back.

If I’m overwhelmingly busy, I sometimes say no even if I feel I owe them. But I’ll explain the details, and encourage the editor to ask me again another time. No one wants to hear about your full to-do list, but it is helpful to say something short like “I have a major grant deadline coming up,” and people will understand. It’s also polite to suggest other possible reviewers.

If a venue is low quality and is somewhere I would never consider submitting, I say no.  Pre-tenure when I nervously accepted every request, I spent way too much time reviewing material that I should not have wasted my time on.  If the venue has different social norms and standards than what you are accustomed to, then you also may be too harsh a critic of the work, because you don’t know their standards.

If a venue is not one I’ve ever submitted to or is from another discipline but is high quality, then I ask myself some simple questions: Does this look interesting? Am I the best person to review it? Often when someone from a different field sends me a review request, it’s because they really did their homework—they realize that the work is interdisciplinary and put in some effort to identify an interdisciplinary reviewer. All of us who do interdisciplinary work have received reviews where this was not done and reviewers missed the point. I respect the editor who went to the extra effort to find someone with that missing expertise, and I try to say yes if I can to those requests.

Finally, the older you get, the more likely it is that you know the person asking you to review. And in that case, you need to consider the pros and cons of the request in the context of your relationship to that person.  However, don’t let a friend who is an editor abuse your friendship. It’s OK to politely decline if they ask you too many times too close together.

Do you have different rules for what to agree to review? Leave me a comment!


The Solution to Free Speech is a Functional Marketplace of Varied Venues

June 12, 2019

I believe in free speech. I believe in a free society where I have the right to say something that deeply offends you, and you may say something that deeply offends me. Censorship of the internet in countries like China is disturbing, and other countries (including the United States and the European Union) are slipping towards censorship one tiny step at a time.

At the same time, I believe in the right to be free from harassment. I believe in the right to be free from crazy, false nonsense showing up on my computer screen (if I don’t want it to). I also believe there is such a thing as “false” and “true,” as I explain in a chapter from my forthcoming book Should You Believe Wikipedia? Truth is socially constructed, and we can sometimes make wrong decisions about what we believe to be “true.”   But truth exists, and all we can do is keep working hard to find it.

So how do we balance these competing desires? The answer is zoning. There need to be places on the internet with different rules for what it’s OK to say, and what standards there are for verification of claims and politeness. Some of those places should be totally open, modulo respecting the most basic laws like the right to honest dealing in business and freedom from libel. Other places should have standards.

For example, the subreddit r/science has over 21 million subscribers. Posts on r/science must link to a peer-reviewed scientific article published in the last six months in a journal with an impact factor of at least 1.5. Comments must be about the science, and anecdotes and jokes are not allowed. The volunteer mods delete tons of great, interesting content. But that’s OK, because you can post that content on other subreddits like r/everythingscience or r/sciences, where the rules are laxer. Reddit is one site on the internet that gets this right. Different subs have different standards, and you can choose one that suits you or go ahead and create your own (as I suggest in my 1996 paper Finding One’s Own in Cyberspace).
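To make the zoning idea concrete, the r/science submission rules described above amount to a simple, automatable gate. Here is a minimal sketch; the function name, field names, and the 183-day cutoff are my own approximations, not Reddit’s actual AutoModerator configuration:

```python
from datetime import date, timedelta

def meets_rscience_rules(links_to_peer_reviewed, published, impact_factor, today):
    """Rough encoding of the r/science submission rules: a post must link to a
    peer-reviewed article, published within ~six months, in a journal with an
    impact factor of at least 1.5."""
    recent = (today - published) <= timedelta(days=183)  # ~six months
    return links_to_peer_reviewed and recent and impact_factor >= 1.5

# A recent article in a journal with impact factor >= 1.5 passes:
print(meets_rscience_rules(True, date(2019, 5, 1), 3.2, today=date(2019, 6, 12)))  # True
# The same article from early 2018 would be too old:
print(meets_rscience_rules(True, date(2018, 1, 1), 3.2, today=date(2019, 6, 12)))  # False
```

The point of the sketch is how little it takes to define a venue’s standards explicitly; a sub with laxer norms simply uses a laxer predicate.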

When there are multiple spaces with different social norms, we can have a marketplace of ideas.  Parents who are upset about inappropriate content on YouTube should send their kids to watch videos on a site with higher standards. YouTube will never improve its practices if we all don’t vote with our feet.  A marketplace doesn’t work unless people have alternatives and make smart choices.

Unfortunately, some sites have become so big that it’s hard to find meaningful alternatives.  A dozen of my friends have proclaimed recently that they are quitting Facebook because they object to Facebook’s practices.  That’s great—that’s what you should do if you don’t like the site’s policies. But what’s the alternative?  In our research on grassroots groups, Sucheta Ghoshal and I have found that groups who do not agree with Facebook’s policies and find its privacy features insufficient often still use it to publicize their cause because that’s where the people are.  They’re stuck.

One of the imperatives in the revised ACM Code of Ethics and Professional Conduct (the first update in 25 years) says:

3.7 Recognize and take special care of systems that become integrated into the infrastructure of society.

Even the simplest computer systems have the potential to impact all aspects of society when integrated with everyday activities such as commerce, travel, government, healthcare, and education. When organizations and groups develop systems that become an important part of the infrastructure of society, their leaders have an added responsibility to be good stewards of these systems. Part of that stewardship requires establishing policies for fair system access, including for those who may have been excluded. That stewardship also requires that computing professionals monitor the level of integration of their systems into the infrastructure of society. As the level of adoption changes, the ethical responsibilities of the organization or group are likely to change as well. Continual monitoring of how society is using a system will allow the organization or group to remain consistent with their ethical obligations outlined in the Code. When appropriate standards of care do not exist, computing professionals have a duty to ensure they are developed.

The impossibly hard problem that follows is: What should we do in response to very large platforms that are integrated into the structure of society and fail to be good stewards? Democratic presidential candidate Elizabeth Warren wants to break up big tech, and she may have a point. The implications are headache-inducing.

In the meantime, one thing we all must do is vote with our feet. To tell platforms that don’t meet our personal standards (Too restrictive of speech? Too unrestrictive? Or just a lousy user interface?) that we won’t use them until they clean up their act. And to support alternative platforms that emerge as they struggle to get started. The marketplace of ideas can’t work unless there’s an actual, working, competitive marketplace.


How the Internet is Broken: Big Questions and Bad Answers

January 7, 2019

I’ve always admired people who ask big questions. Who don’t just see the world with received wisdom, but interrogate it. The internet is helping people ask big questions by connecting them with others who question, who wonder. Is our water really safe to drink? Do kids from different ethnic and economic backgrounds really have a chance to succeed economically? Is our education system teaching people the right things? How is corporate control over high-end content production shaping our culture? How are changes in the ecosystem of journalism reshaping what we collectively believe? Asking big questions is a critical first step towards changing the world for the better.

What if, though, the internet is better at helping people to ask big questions than at getting them to correct answers?

I need for a moment to explain what I mean by a “correct” answer. Epistemology and sociology of science teach us that “truth” exists, but our access to it is indirect. What we all agree on is our best attempt at getting to truth.  This is as true for the most elite scientific circles as it is for day-to-day life. Scientists use the process of “peer review” to come to consensus on our best insights into truth, at the moment. And by critiquing and revising one another’s work, our insights continually improve. Knowledge (justified, true belief) is socially constructed.

My contention is that the internet is especially good at helping start this process of social construction of knowledge, but really bad at finishing it. Because it’s too easy to jump to incorrect conclusions.  Especially when you have a community of like-minded friends eager to support those conclusions.

An Example: Vaccination

Here are some important questions: Does the pharmaceutical industry have the best interests of people as its top priority? Is it possible that pressures to improve profits ever outweigh good medical judgment? Is a market-driven model for medical innovation really our best choice? Here are some more: Why are rates of autism rising? Why do children who begin to develop normally sometimes suddenly regress?

Asking those questions is important. Questions about autism are of life-changing significance for families with children on the spectrum, and the bigger questions about the design of our incentive system for medicine are critical for everyone. Groups of people talking together on the internet can help one another to see outside the frame they are trapped in, and ask big questions.  What’s harder is coming to correct conclusions.

Unfortunately, we don’t yet have satisfying answers to those big questions. We don’t know what causes autism (or even how many different medical phenomena make up what we call “autism”). The system that funds medical research has strengths and weaknesses, and the magic of the profit motive and the market doesn’t always steer innovation in good ways. For people who desperately need answers to these questions, it’s easy to find someone else who thinks they have an answer. And agreeing with their answer is easy and satisfying. Much easier than the harder truth: we don’t yet have good answers.

It’s easy to see how people might wonder if vaccines cause autism—many children regress developmentally right after receiving vaccinations. It’s been conclusively proven that this is correlation, not causation. There is absolutely no doubt: vaccines do not cause autism. But it’s emotionally easier for parents to believe they have a reason and someone to blame than the stark truth that their child’s autism is not yet understood.

Good Questions and Bad Answers

I see this pattern over and over again: people are asking good questions but jumping to believe in bad answers. My students and I have been studying people who believe in “chem trails,” the idea that the visible condensation trails behind airplanes are evidence of deliberate spraying for nefarious purposes (like climate control or deliberate population reduction). Chem trails believers ask some great questions: Is our environment really safe? Are our systems for monitoring pollution accurate and honest? Are corporations being held accountable for environmental damage they cause? Asking these questions is important. Unfortunately, this particular community has grabbed onto something visible (condensation trails behind airplanes) as an explanation for the source of problems. It’s easier to blame something you can see than to accept the harder truth that pollution (like lead in the water in Flint, Michigan) is typically invisible. It’s also easier to settle on a centralized explanation (the government is manipulating us deliberately) than a decentralized one. Changes in the environment are in fact part of a complex socio-technical system made up of human and non-human actors with diverse motives and degrees of accountability.

People leap to easy answers: choosing to believe there is a single, well-understood reason for a phenomenon when there are actually many poorly understood ones that interact in complex ways. Choosing to believe in a single nefarious actor rather than a set of complex, emergent phenomena with diverse actors who are both well and poorly intentioned. Choosing to believe a provided answer that other people support rather than the harder truth that answers are currently unsatisfying and partial.


The fundamental solution to this problem is education. We need to teach school children to be better at interrogating evidence and deciding (for themselves) what to believe. We need to teach kids to be better critical consumers of information.

We also need to do a better job of teaching kids about the scientific method. A growing group of people now believe the earth is flat. To empirically “prove” the earth is flat, all you have to do is to conduct an experiment to show the curvature of the earth and do it badly.  We need to teach our children to observe the world accurately and debunk pseudo-science.

Finally, the design of the internet could be improved. We desperately need more meta-data. For any given piece of information online, it should be easily possible to ask: what supports this? What people will do is shaped by what is easy to do, so this meta-data needs to be incredibly easy to see. And there need to be multiple sources of meta-data to protect us against deliberate manipulation, so people can choose their gold standard.
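As a thought experiment, the meta-data layer sketched above might look like a claim-to-evidence lookup with multiple independent providers, so people can choose their own gold standard. Everything here—the provider names, the URL, the function—is hypothetical, not a description of any existing service:

```python
# Hypothetical provenance providers: each maps a claim to a list of
# supporting-evidence URLs. All names and URLs are invented for illustration.
providers = {
    "fact-checker": lambda claim: ["https://example.org/evidence/123"],
    "journal-index": lambda claim: [],  # no peer-reviewed support found
}

def what_supports(claim):
    """Ask every meta-data provider what evidence supports a claim.
    Multiple independent sources protect against manipulation of any one."""
    return {name: lookup(claim) for name, lookup in providers.items()}

result = what_supports("the water in this city is safe to drink")
print(result["journal-index"])  # [] -> this source found no supporting evidence
```

The design choice that matters is the plurality of providers: a single authority deciding “what supports this” would itself be a target for the deliberate manipulation the post warns about.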

The hardest part is how to make this happen. We need a change in the economic system underlying the internet that financially supports the creation of accurate information. We do not currently have a workable “marketplace” for ideas. We need to re-engineer the system so that helping people to find better answers is profitable, and good content spreads faster than bad.


Does anyone “own” a community? Today’s drama on Kotaku in Action

July 13, 2018

Does anyone “own” a community? In the real world, if you own the physical building in which a group meets, you can unilaterally shut it down, closing the doors and refusing access. Online, if you run the server that an online community operates on, you can just shut it down.  What about running a group on a commercial platform, like a subreddit?

Testing the limits of that, today the founder of the subreddit KotakuInAction (KiA), david-me, decided he had had enough. He didn’t like what the group had become, so he removed powers from all the other moderators and took the group “private” (blocking access to new members). He deleted the code that customized the group. He did this unilaterally.

Can he do that? KiA has nearly 100,000 subscribers. It was created and sustained by the contributions of its members and the work of its volunteer moderators. Can the founder just shut it down?

If david-me were running the server, the answer would be yes—he could shut down the process and walk away. In this case, though, the server is hosted on a commercial platform, Reddit. The other moderators asked Reddit to restore the group, and it did. Do david-me’s feelings about this matter? We’ll see what the Reddit administrators decide, but the answer is probably no.

And that’s probably the right answer. In the 17th century, John Locke wrote that an individual has property rights to what they remove from a state of nature with their labor. That fallen log on public land doesn’t belong to anyone, but once you chop it and stack the firewood, it’s yours. The metaphorical log stacking of creating and maintaining KiA was done by all its moderators and members, so in a sense it belongs to all of them, and one person shouldn’t be allowed to just take it away.

What that means for founders of groups is that you need to be careful—once your creation is launched into the world, it’s no longer really “yours.” It has a life of its own. Like the creator of the golem or Dr. Frankenstein, david-me is a kind of tragic figure. Whether you agree with his concerns or not, he’s the creator whose work got away from him.

Reddit is a commercial platform, and ultimately must be guided by the company’s mission statement and financial goals. It is not likely to be in the company’s interests to allow one person to shut down an active group with nearly 100k members. But it’s Reddit’s prerogative to make that decision. It’s up to the admins, not to david-me, the other mods, or the members. And this surfaces a troubling issue. Our “public sphere”—the places where we have meaningful public discussions—is not really public at all. What can be said, and which spaces stay open or close down, are matters of corporate policy, with no recourse if the public doesn’t like the decisions. This is not ideal. It’s increasingly clear that we need public spaces that are truly public—owned by the people, and subject to laws like the First Amendment in the US about what can and cannot be said.

Combating Human Trafficking with Big Data: Today’s Plot Twist

April 11, 2018

When the government shuts down websites that serve the sex industry, the activity doesn’t stop—it just moves. All things considered, it’s constructive for the government to keep shutting things down. It does slow down illegal activity, at least for a while.  But on the other hand, it also slows down law enforcement, who rely on those sites for leads to help rescue victims of human trafficking.

Human trafficking—when the victim is forced to participate or is under age—is different from consensual sex work. Georgia Tech PhD student Julia Deeb-Swihart uses machine learning, network analysis, and techniques from information visualization to create tools to help law enforcement combat trafficking. As the activity moves, our research moves to follow it.

New legislation passed last month gives the government greater authority to prosecute websites that facilitate trafficking, and has the bad guys scared. But give them credit for being clever. Hiding in plain sight is the Facebook group “best automotive reviews.” The main purpose of the page is to tell customers of a sex worker review site what the new URL is each time the site moves. But in the meantime, people have been discussing their automotive needs, like these posts:

[Three screenshots of posts from the group.]

It’s certainly a new challenge for machine learning to detect those posts! The technical challenge of the week. But wherever they go, we’ll keep chasing them.


Web 3.0: Trust Nothing

December 12, 2017

I’ve always avoided using the term “Web 2.0.” I think it was supposed to mean that now the web is social—but it was always social to me, so calling it Web 2.0 just made me roll my eyes. To be fair, the birth of Web 2.0 does represent a time when people became more aware of the social nature of the web that was always there.

As much as the phrase “Web 2.0” has always irritated me, I found this tweet by Zeynep Tufekci compelling:

Web 1.0: It’s all about information! Web 2.0: Let’s go social! Web 3.0: Weaponized/monetized fraud; bots & trolls polluting the public sphere; organized attention manipulation ops; censorship via information glut, distraction and undermining credibility: Internet of Fake Things!

I believe we will look back on the first fifteen years of this century as kinder, gentler times. Consider for example the idea that we can use the web to gather meaningful public comment on issues. We used to really believe that, didn’t we? But in 2017 when the US Federal Communications Commission called for public comments on the issue of whether to repeal net neutrality rules, more than a million comments were faked. Web 3.0 is the recognition that we live in an adversarial environment, and the source of everything needs to be verified.

We live in the era of Boaty McBoatface. In 2016 when the British government asked for the public to vote on the name of its new polar research ship, the name Boaty McBoatface was voted to the top by internet denizens. (In the end the ship was named the RRS Sir David Attenborough, but Boaty McBoatface was used to name an autonomous underwater vehicle carried by the Attenborough.) The story is funny, except when you ponder the fact that going forward people are going to seriously hesitate before asking for public input on naming anything. Certainly they’ll never be so naïve again as to promise to use the top-voted name.

We increasingly live in an adversarial online environment. The phishing messages I am getting have gotten better and better over the last year. At Georgia Tech we have an email address for reporting phishing attempts to our network services organization. One phishing attempt I sent in got returned to me with a polite message, “This is actually a real message.”  I sent it back again—look more closely. And in fact it was a phishing attempt. OK, maybe someone was just having a bad day, but these things are getting harder and harder to detect. As I try to coach my parents and children in safe internet use, I have finally moved to simply telling them: Don’t click on a link in an email, ever. No matter how sure you are it’s real. Go type in the address of the website you are trying to reach and access it from there.

The cataclysm approaching us is The Internet of Things. We are increasingly surrounded by devices that can listen to us or change things in our environment, and our ability to keep those things secure is dubious.  Keith Lowell Jensen quipped, “What Orwell failed to predict is that we’d buy the cameras ourselves.” If you think getting brigaded by trolls on Twitter is unpleasant now, imagine when they control your home smart speaker, light switches, and front door.

We will, of necessity, move to a new age of locked-down identities and verified information. The internet users of the 2020s will look back on us as quaint in our openness and trust. And we will find new ways to be open and trusting in smaller groups and more locked-down communications media. If you want to leave a comment for the FCC, please authenticate with your national web ID. It’s a sad but necessary transition. Unless we want to give up on the idea of public commentary altogether, which would be sadder.

The web was always social, but with Web 2.0 the public became more aware of it. The web was always an adversarial environment in need of more security, and with Web 3.0 we, sadly, became more aware of it.


Beyond Net Neutrality—All the Places Our Markets are Broken

December 9, 2017

Net neutrality has always struck me as a weirdly radical idea. Isn’t allowing companies to offer premium services at a higher price how our world works? I don’t particularly like that some people get to squeeze into coach seats and others get first class, but that’s fundamental to free enterprise.

Getting rid of net neutrality rules would be a great idea if markets for internet service worked. Here’s what we’d need: choice and transparency. Each household would have to be able to switch ISPs without a prohibitive cost in time or money. And as you shopped for an ISP, you’d need transparency—to really know what you are buying, and how much it will cost. You would know the speed of service you’re getting and that you won’t be throttled without your knowledge.

Then people would vote with their feet, paying money to companies that offer good service at a fair price. Sure, one ISP might make a deal with Bing to make them faster than Google—but then people who like Google wouldn’t use that ISP. And maybe the managers of that ISP would decide that giving unequal access wasn’t such a good idea after all. People who have more modest needs could buy plans that cost less, and others could get new features not currently available for a premium. The possibility of higher profit margins for premium features would drive innovation.  Except none of this works without choice and transparency.

Are we likely to get either choice or transparency without government regulation?  I’d bet money against it. I’ve already been secretly throttled by more than one ISP, and I had no idea that would happen when I signed up for the plan. There are only a few available service providers in most areas. Neither a wealth of options nor clarity on what you are paying for are likely to happen. For that reason, we need net neutrality rules.

All of this became clearer to me after I taught net neutrality in my Computers and Society class this fall. There’s nothing quite like teaching something to help you understand it. Stepping back from net neutrality, something struck me: There are lots of other places where we don’t have either choice or transparency.

To have a fair, functioning market, we need good information. But good information is surprisingly rare, as any one of the nearly half a million people who bought Volkswagen cars with falsified environmental data can tell you.  Even if a company isn’t committing deliberate fraud like VW, how can you know how reliable that car is really going to be? How can you tell if that organic produce is any healthier for you than the conventional produce that costs much less?  How can you tell if the doctor you went to is competent if there are no easily accessible statistics on outcomes for past patients?  The structure of our society is built on the idea of fair markets, but to a large degree those fair markets don’t exist because of lack of information.

Democrats tend to take a consumer protection view of regulation—the government should actively work to protect citizens. Republicans tend to take a free market view—let companies do what they want, and feedback and demand from consumers will drive innovation. Whichever view you take, here’s something we can all agree on: transparency is fundamental. Whether we take a free market or consumer protection approach, nothing will work without the availability of accurate information in a form people can understand.

Some people would argue that market forces will lead to the production of that information—but that’s simply not true. For example, it’s immensely useful for consumers to know how many calories are in foods they order at restaurants. (The cheeseburger has half the calories of the Caesar salad at Cheesecake Factory—who would’ve guessed?)  You could say, if consumers value that information, then they will only patronize restaurants with calories on menus. But did that actually ever happen? Of course not. Not until laws were passed requiring large chain restaurants to put calorie information on menus. Starting in May 2018 in the US, consumers can make smarter choices, and market pressure can lead to healthier offerings if that’s what customers want. The whole system doesn’t work without the information. Information is a prerequisite for the formation of a fair market, not a consequence.

In the absence of a fair market, net neutrality solves the problem. And if what we value is innovation, it fosters innovation in an intriguing way: new companies have an easier time getting a start when they don’t need to pay a premium for bandwidth.  It’s a strangely radical idea, but I like it. And I wonder if there are other areas where ideas like this would be useful. Healthcare neutrality, perhaps?

I’m feeling like my one semester of high-school economics is not adequate preparation to write about this subject. If any economists out there want to correct errors or add some nuance, please leave me a comment!

Computing Challenges We Can’t Solve with Research: Simple Ideas and Platform Lock

September 22, 2017

Critical problems for computing and society are increasingly economic. It’s not that we don’t know how to fix them—it’s that a purely market economy model to fund software development doesn’t support some simple things that would make the world a better place.

My students and I research and design collaborative computing systems. We start by studying existing groups and trying to understand their needs. For example, Kurt Luther studied animators trying to work on collaborative projects with teams of fifty or more artists. Based on what he learned, he built a project management tool for creative collaboration that tried to balance decentralized and centralized creative decision making. Jill Dimond worked with the nonprofit Hollaback, which is trying to end street harassment, and helped redesign their website with a federated structure that helped them to rapidly spread around the world. In these cases, we as researchers were able to understand the problem and innovate to solve it.

Lately, though, we’ve been exploring problems and finding solutions that are straightforward but impossible to realize. The obstacles are two-fold: the solutions are too simple to make for meaningful research problems, and there is the barrier of platform lock.

For example, a team of GT students led by Hayley Evans found that people trapped in the economic crisis in Venezuela are increasingly using Facebook Groups for barter of basic necessities. It’s no longer possible to buy diapers at a fair price, but you can trade staples like flour for them. However, people are still price gouging and duping others with fake products. The solutions here are simple—a price comparison tool like the one StubHub provides for ticket sales could give everyone a calibration on what exchanges are fair. A reputation system like the one on eBay could help stop scammers. If people have public reputations, then individuals can choose not to trade with someone who has a negative reputation and be extra careful with a new account with no history. These are established solutions, but it’s not clear who can build these tools for the Venezuelans, even though the need is desperate. It’s certainly not research—it’s too easy. To do something as a research project, we need something that we can raise grant funding for and publish about—we need to innovate. But solving this problem doesn’t necessarily need much innovation.
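The eBay-style reputation system mentioned above really is simple enough to sketch in a few lines, which is exactly the point about it not qualifying as research. This is an illustrative toy with an invented data model, not a description of any existing Facebook or eBay feature:

```python
from collections import defaultdict

class ReputationLedger:
    """Toy eBay-style public reputation ledger for a barter group."""

    def __init__(self):
        # trader -> list of +1 / -1 ratings from past trading partners
        self.feedback = defaultdict(list)

    def rate(self, trader, score):
        self.feedback[trader].append(score)

    def summary(self, trader):
        ratings = self.feedback[trader]
        if not ratings:
            # no history: be extra careful, as suggested above
            return "new account - trade with extra care"
        return f"{sum(ratings):+d} net from {len(ratings)} trades"

ledger = ReputationLedger()
ledger.rate("flour_trader", +1)
ledger.rate("flour_trader", +1)
ledger.rate("flour_trader", -1)
print(ledger.summary("flour_trader"))    # +1 net from 3 trades
print(ledger.summary("brand_new_user"))  # new account - trade with extra care
```

A few dozen lines like these capture the whole mechanism; the hard part, as the post argues, is not building it but finding who will build and host it for the people who need it.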

The second part of the problem is platform lock. Venezuelans are using the platform they are already on—Facebook. It would be hard to imagine bringing people to a custom platform, even if we had the time and resources to try to build one. And although you can make some small changes to platforms like Facebook with browser scripting, those solutions are limited and fragile.

Here’s another example. In 2012, Dimond did a study of the use of mobile and social computing by survivors of domestic violence. Her research concluded that there are some simple things that could really help people in this situation. For example, features developed as parental controls could be adapted to provide protection from harassment for adults. But we have the same problems again—the solutions are largely so simple as to not qualify as research, and they’d have to be implemented by mobile carriers. What is missing is societal—why can’t we find resources to do these simple things? A purely market-based model for software development falls short of meeting people’s needs.

These issues are going to multiply. As software reaches into more and more nooks and crannies of everyday life, we need an economic model that can deliver needed features that don’t make sense from a pure profit motivation. This will involve more activity by software nonprofits like Mozilla that design tools for the public good. It will further require better computer science education and extensible platforms so that people can develop solutions for themselves.

What if we all insisted on reasonable NDAs?

August 20, 2017

Next week I am attending a mini-conference in which a big tech company (I’ll call it BTC) has invited a group of academics to advise them. Everyone attending was asked to sign a non-disclosure agreement (NDA). The NDA I was sent initially didn’t define what was confidential, and had no time limit.  So basically they’re asking everyone to keep secret who-knows-what forever. Does that make sense?

How can you protect confidential information if you don’t even know what is confidential? A fair NDA needs to spell it out. This is called a “marking requirement.” Any tangible materials containing proprietary information shared with you should be marked “confidential.” Ideally also, the agreement should say that if confidential information is disclosed orally, they will follow up with a copy in tangible form marked confidential within a few weeks after the disclosure. That last part can be harder to get companies to agree to, because it’s a hassle.

Second, a fair NDA should have an end date. It’s not reasonable to ask you to assume a lifelong obligation, is it? They’re not going to tell me the formula for Coca-Cola—what companies share in meetings like this changes rapidly. At the speed that things change in high tech, a three-year limit is fair. Five years at most.

I told my hosts at BTC that I’d like some changes to the NDA, and they graciously complied. The back-and-forth process between their lawyers and my university’s lawyers took so long that I almost ended up not going to the event. But they were reasonable, and the result is fair. Here’s my question: why doesn’t everyone always ask for more reasonable NDAs? If we all did, then companies wouldn’t be sending out the unfair versions in the first place.

Companies keep asking people to sign ridiculous non-disclosure agreements, because folks sign them without objecting. If we all insist on reasonable NDAs, this will no longer be a problem.


