All of Darmani's Comments + Replies

I went from being at a normal level of hard-working (for a high schooler under the college admissions pressure-cooker) to what most would consider an insane level.

The first trigger was going to a summer program after my junior year where I met people like @jsteinhardt who were much smarter and more accomplished than me. That kicked off a senior year of learning advanced math very quickly to try to catch up.

Then I didn't get into my college of choice and got a giant chip on my shoulder. I constantly felt I had to be accomplishing more, and merely outdoing my peer... (read more)

In Chinese, the words for "to let someone do something" and "to make someone do something" are the same, 让 (ràng). My partner often mixes the two up. This one it did not get even after several promptings, not until I asked about the specific word.

Then I asked why both a Swede and a Dane I know say "increased with 20%" instead of "increased by 20%." It guessed that it had something to do with prepositions, but did not volunteer the preposition in question. (Google Translate answered this; "increased by 20%" translates to "ökade med 20%," and "med" common... (read more)

More discussion here: https://www.lesswrong.com/posts/gW34iJsyXKHLYptby/ai-capabilities-vs-ai-products

You're probably safe so long as you restrict distribution to the minimum group with an interest. There is a conditional privilege when the sender shares an interest with the recipient. It can be lost through overpublication, malice, or reliance on rumors.

A possible solution against libel is to provide an unspecific accusation, something like "I say that X is seriously a bad person and should be avoided, but I refuse to provide any more details; you have to either trust my judgment, or take the risk

 

FYI, this doesn't actually work. https://www.virginiadefamationlawyer.com/implied-undisclosed-facts-as-basis-for-defamation-claim/

2Viliam1y
Damn. Okay, what about "person X is banned from our activities, we do not explain why"?

It does not take luck to find someone who can help you stare into the abyss. Anyone can do it. 

It's pretty simple: Get a life coach.

That is, helping people identify, face, and reason through difficult decisions is a core part of what life coaches do. And just about all the questions that Ben cobbled together at the end (maybe not "best argument for" — I don't like that one) can be found in a single place: coaching training. All are commonly used by coaches in routine work.

And there are a lot more tools than the handful Ben found. These questi... (read more)

Quote for you summarizing this post:

“A person's success in life can usually be measured by the number of uncomfortable conversations he or she is willing to have.”

— Tim Ferriss

4Eli Tyre7mo
I think Mark Zuckerberg said this on the Tim Ferriss podcast, not Ferriss himself?

This post is the culmination of years of thinking that produced a dramatic shift in my worldview. It is now a big part of my life and business philosophy, and I've shown it to friends many times when explaining my thinking. It's influenced me to attempt my own bike repair, patch my own clothes, and write web-crawlers to avoid paying for expensive API access. (The latter was a bust.)

I think this post highlights using rationality to analyze daily life in a manner much deeper than you can find outside of LessWrong. It's in the spirit of the 2012 post "Rational Toothpast... (read more)

2Ben Pace1y
This afternoon.

https://galciv3.fandom.com/wiki/The_Galactic_Civilizations_Story#Humanity_and_Hyperdrive

I've hired (short-term) programmers to assist with my research several times. Each time, I've paid out of my own pocket. Even assuming I could have used grant money, it would have been too difficult. And, long story short, there was no good option that involved giving funds to my lab so they could do the hire properly.

Grad students are training to become independent researchers. They have the jobs of conducting research (which in most fields is mostly not coding), giving presentations, writing, making figures, reading papers, and taking and teaching classes. Their career and skillset is rarely aligned with long-term maintenance of a software project; usually, they'd be sacrificing their career to build tools for the lab.

2Gunnar_Zarncke2y
Many grad students go into the free market later anyway so there should be some that fit.

This is a great example of the lessons in https://www.lesswrong.com/posts/tTWL6rkfEuQN9ivxj/leaky-delegation-you-are-not-a-commodity

Really appreciate this informative and well-written answer. Nice to hear from someone on the ground about SELinux instead of the NSA's own presentations.

I phrased my question about time and space badly. I was interested in proving the time and space behavior of the software "under scrutiny", not in the resource consumption of the verification systems themselves.

 

LOL!

I know a few people who have worked in this area. Jan Hoffmann and Peng Gong have worked on automatically inferring complexity. Tristan Knoth has gone the other way, including resource bounds in specs for program synthesis. There's a guy who did an MIT Ph.D. on building an operating system in Go, and as part of it needed an analyzer... (read more)

I must disagree with the first claim. Defense-in-depth is very much a thing in cybersecurity. The whole "attack surface" idea assumes that, if you compromise any application, you can take over an entire machine or network of machines. That is still sometimes true, but less and less so. Think it's game over if you get root on a machine? Not if it's running SELinux.

 

Hey, can I ask an almost unrelated question that you're free to ignore or answer as a private message OR answer here? How good is formal verification for time and space these days?

 

I... (read more)

2jbash2y
I phrased my question about time and space badly. I was interested in proving the time and space behavior of the software "under scrutiny", not in the resource consumption of the verification systems themselves. It would be nice to be able to prove things like "this program will never allocate more than X memory", or "this service will always respond to any given request within Y time".

Hmm. It looks like my reply notifications are getting batched now. I didn't realize I'd set that up.

I've reordered some of this, because the latter parts get into the weeds a lot and may not be worth reading. I advise that anybody who gets bored stop reading there, because it's probably not going to get more interesting.

For background, I haven't been doing security hands-on for the last few years, but I did it full time for about 25 years before that, and I still watch the space. I started out long enough ago that "cyber" sets my teeth on edge...

State of

... (read more)

I agree with just about everything you said, as well as several more criticisms along those lines you didn't say. I am probably more familiar with these issues than anyone else on this website, with the possible exception of Jason Gross.

Now, suppose we can magic all that away. How much then will this reduce AI risk?

2jbash2y
As others have written, I think you have to get very close to perfection before you get much of a win against the kind of AGI everybody on here is worried about, because you have to assume that it can find very subtle bugs. Also, if you assume it has access to the Internet or any other large selection of targets, it will attack the thing that has not been hardened... so you have to get everything hardened before this very smart adversary pops up. But it sure can't hurt. And it would help other stuff, too. Hey, can I ask an almost unrelated question that you're free to ignore or answer as a private message OR answer here? How good is formal verification for time and space these days?

I don't see what this parable has to do with Bayesianism or Frequentism. 

 

I thought this was going to be some kind of trap or joke around how "probability of belief in Bayesianism" is a nonsense question in Frequentism.

I do not. I mostly know of this field from conversations with people in my lab who work in this area, including Osbert Bastani. (I'm more on the pure programming-languages side, not an AI guy.) Those conversations kinda died during COVID when no-one was going into the office, plus the people working in this area moved on to their faculty positions.

I think being able to backtrace through a tree counts as victory, at least in comparison to neural nets. You can make a similar criticism about any large software system.

You're right about the random forest there;... (read more)

I think you're accusing people who advocate this line of idle speculation, but I see this post as idle speculation. Any particular systems you have in mind when making this claim?

I'm a program synthesis researcher, and I have multiple specific examples of logical or structured alternatives to deep learning

 

Here's Osbert Bastani's work approximating neural nets with decision trees, https://arxiv.org/pdf/1705.08504.pdf .  Would you like to tell me this is not more interpretable over the neural net it was generated from?
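To make the distillation idea concrete, here is a toy sketch of it (my own illustration, far simpler than the tree-extraction algorithm in the paper): query a black-box "teacher" — a hypothetical stand-in for a neural net — and fit the best depth-1 decision stump to its answers. The surrogate's entire logic is then one human-readable rule.

```python
import itertools

# Hypothetical black box standing in for a trained neural net.
def teacher(x):
    return int(0.3 * x[0] + 0.7 * x[1] > 0.5)

# Query the teacher on a grid of points.
points = [(a / 10, b / 10) for a, b in itertools.product(range(11), repeat=2)]
labels = [teacher(p) for p in points]

def stump_error(feat, thresh):
    # How often "predict 1 iff x[feat] > thresh" disagrees with the teacher.
    return sum(int(p[feat] > thresh) != l for p, l in zip(points, labels))

# Exhaustively pick the best single-feature, single-threshold rule.
feat, thresh = min(
    ((f, t / 10) for f in (0, 1) for t in range(11)),
    key=lambda ft: stump_error(*ft),
)
print(f"surrogate rule: predict 1 iff x[{feat}] > {thresh}")
```

The stump correctly latches onto the heavier-weighted feature, and you can read its whole decision procedure at a glance — which is the interpretability claim in miniature.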

Or how abou... (read more)

9Ash Gray2y
I think you and John are talking about two different facets of interpretability. The first one is the question of "white-boxing:" how do the model's internal components interrelate to produce its output? On this dimension, the kind of models that you've given as examples are much more interpretable than neural networks. What I think John is talking about, I understand as "grounding." (Cf. Symbol grounding problem) Although the decision tree (a) above is clear in that one can easily follow how the final decision comes about, the question remains -- who or what makes sure that the labels in the boxes correspond to features of the real world that we would also describe by those labels? So I think the claim is that on this dimension of interpretability, neural networks and logical/probabilistic models are more similar.

Interesting. Do you know if such approaches have scaled to match current SOTA models? My guess would be that if you had a decision tree that approximated e.g. GPT-3, it wouldn't be very interpretable either.

Of course, you could look at any given decision and backtrace it through the tree, but I think it would still be very difficult to, say, predict what the tree will do in novel circumstances without actually running the tree. And you'd have next to no idea what the tree would do in something like a chain of thought style execution where the tree so... (read more)

I'm a certified life coach, and several of these are questions found in life coaching.
 

E.g.:

Is there something you could do about that problem in the next five minutes?

Feeling stuck sucks. Have you spent a five minute timer generating options?

What's the twenty minute / minimum viable product version of this overwhelming-feeling thing?

These are all part of a broader technique of breaking down a problem.  (I can probably find a name for it in my book.) E.g.: Someone comes in saying they're really bad at X, and you ask them to actually rate their sk... (read more)

I realize now that this expressed as a DAG looks identical to precommitment.

Except, I also think it's a faithful representation of the typical Newcomb scenario.

Paradox only arises if you can say "I am a two-boxer" (by picking up two boxes) while you were predicted to be a one-boxer. This can only happen if there are multiple nodes for two-boxing set to different values.

But really, this is a problem of the kind solved by superspecs in my Onward! paper. There is a constraint that the prediction of two-boxing must be the same as the actual two-boxing. Traditi... (read more)

Okay, I see how that technique of breaking circularity in the model looks like precommitment.

 

I still don't see what this has to do with counterfactuals though.

2Chris_Leong2y
"You decide either "I am a one-boxer" or "I am a two-boxer," the boxes get filled according to a rule, and then you pick deterministically according to a rule. It's all forward reasoning; it's just a bit weird because the action in question happens way before you are faced with the boxes." So you wouldn't class this as precommitment?

I don't understand what counterfactuals have to do with Newcomb's problem. You decide either "I am a one-boxer" or "I am a two-boxer," the boxes get filled according to a rule, and then you pick deterministically according to a rule. It's all forward reasoning; it's just a bit weird because the action in question happens way before you are faced with the boxes. I don't see any updating on a factual world to infer outcomes in a counterfactual world.
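The forward-reasoning view above fits in a few lines of code (a sketch under the assumption of a perfect predictor; the function name and payoffs are the standard ones):

```python
# Newcomb's problem as pure forward reasoning: the "action" is fixed when
# the disposition is, before the boxes are filled. Assumes a perfect predictor.
def newcomb_payoff(disposition):
    prediction = disposition                       # predictor reads the disposition
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000
    taken = opaque if disposition == "one-box" else opaque + transparent
    return taken

print(newcomb_payoff("one-box"))
print(newcomb_payoff("two-box"))
```

Everything flows forward from the single disposition node; there is only one place where "one-box vs. two-box" is decided, so no paradox can arise.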

"Prediction" in this context is a synonym for conditioning. P(y | x) is defined as P(x, y) / P(x).

If ... (read more)

2Chris_Leong2y
Everyone agrees what you should do if you can precommit. The question becomes philosophically interesting when an agent faces this problem without having had the opportunity to precommit.

While I can see this working in theory, in practice it's more complicated, as it isn't obvious from immediate inspection to what extent an argument is or isn't dependent on counterfactuals. I mean, counterfactuals are everywhere! Part of the problem is that the clearest explanation of such a scheme would likely make use of counterfactuals, even if it were later shown that these aren't necessary.

 

  1. Is the explanation in the "What is a Counterfactual" post linked above circular?
  2. Is the explanation in the post somehow not an explanation of counterfactuals?


The

... (read more)
3Chris_Leong2y
Is the explanation in the post somehow not an explanation of counterfactuals? Oh, it's definitely an explanation of counterfactuals, but I wouldn't say it's a complete explanation of counterfactuals as it doesn't handle exotic cases (ie Newcomb's). I added some more background info after I posted the bounty and maybe I should have done that originally, but I posted the bounty on LW/alignment forum and that led me towards taking a certain background context as given, although I can now see that I should have clarified this originally. Is the explanation in the "What is a Counterfactual" post linked above circular? It seems that way, although maybe this circular dependence isn't essential. Take for example the concept of prediction. This seems to involve imagining different outcomes. How can we do this without counterfactuals? I guess I have the same question with interventions. This seems to depend on the notion that we could intervene or we could not intervene. Only one of these can happen - the other is a counterfactual.

I'm having a little trouble understanding the question. I think you may be thinking of either philosophical abduction/induction or logical abduction/induction.

 

Abduction in this article is just computing P(y | x) when x is a causal descendant of y. It's not conceptually different from any other kind of conditioning.
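As a concrete illustration (the numbers here are made up), computing P(y | x) against the causal direction is just Bayes' rule in a two-node model Y → X:

```python
# Abduction as ordinary conditioning in a two-node causal model Y -> X.
# Priors and mechanism are made-up numbers for illustration.
p_y = {0: 0.7, 1: 0.3}               # prior over the cause Y
p_x1_given_y = {0: 0.1, 1: 0.8}      # mechanism: P(X=1 | Y=y)

# P(Y=1 | X=1) = P(Y=1, X=1) / P(X=1)
joint_x1 = {y: p_y[y] * p_x1_given_y[y] for y in (0, 1)}
p_y1_given_x1 = joint_x1[1] / sum(joint_x1.values())
print(round(p_y1_given_x1, 4))
```

Nothing in this computation cares that X is a causal descendant of Y; it's the same conditioning you'd do in any direction.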

In a different context, I can say that I'm fond of Isil Dillig's thesis work on an abductive SAT solver and its application to program verification, but that's very unrelated.

I'm not surprised by this reaction, seeing as I jumped on banging it out rather than checking to make sure that I understand your confusion first. And I still don't understand your confusion, so my best hope was giving a very clear, computational explanation of counterfactuals with no circularity in hopes it helps.

Anyway, let's have some back and forth right here. I'm having trouble teasing apart the different threads of thought that I'm reading.

 

After intervening on our decision node do we just project forward as per Causal Decision Theory or do we w

... (read more)
2Chris_Leong2y
While I can see this working in theory, in practice it's more complicated, as it isn't obvious from immediate inspection to what extent an argument is or isn't dependent on counterfactuals. I mean, counterfactuals are everywhere! Part of the problem is that the clearest explanation of such a scheme would likely make use of counterfactuals, even if it were later shown that these aren't necessary. The best source for learning about FDT is this MIRI paper, but given its length, you might find the summary in this blog post answers your questions more quickly. The key unanswered question (well, some people claim to have solutions) in Functional Decision Theory is how to construct the logical counterfactuals that it depends on. What do I mean by logical counterfactuals? MIRI models agents as programs, i.e. logic, so that imagining an agent taking an action other than the one it takes becomes imagining logic being such that a particular function provides a different output on a given input than it actually does. Now I don't quite agree with the logical counterfactuals framing, but I have been working on the question of constructing appropriate counterfactuals for this situation.

Oh hey, I already have slides for this.

 

Here you go: https://www.lesswrong.com/posts/vuvS2nkxn3ftyZSjz/what-is-a-counterfactual-an-elementary-introduction-to-the

 

I took the approach: if I very clearly explain what counterfactuals are and how to compute them, then it will be plain that there is no circularity. I attack the question more directly in a later paragraph, when I explain how counterfactuals can be implemented in terms of two simpler operations: prediction and intervention. And that's exactly how it is implemented in our causal probabilis... (read more)
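Here's a minimal sketch of that recipe (the toy structural model and numbers are mine, purely for illustration): condition on the factual observation to recover the exogenous state, then intervene and recompute.

```python
# Counterfactual = prediction (conditioning on the factual observation to
# recover the exogenous variables) followed by intervention.
# Toy deterministic structural model, made up for illustration.
def model(u1, u2, x_override=None):
    x = u1 if x_override is None else x_override   # X := U1, unless intervened on
    y = x + u2                                     # Y := X + U2
    return x, y

# Factual world: we observed X = 1, Y = 3.
# Step 1 (prediction): condition on the observation to infer the exogenous state.
consistent = [(u1, u2) for u1 in range(5) for u2 in range(5)
              if model(u1, u2) == (1, 3)]
(u1, u2), = consistent                             # unique: (1, 2)

# Step 2 (intervention) + recompute: "had X been 0, Y would have been..."
_, y_cf = model(u1, u2, x_override=0)
print(y_cf)
```

Neither step individually appeals to a counterfactual: the first is ordinary conditioning, the second is just running the model with one assignment overridden.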

2Chris_Leong2y
Hey Darmani, I enjoyed reading your post - it provides a very clear explanation of the three levels of the causal hierarchy - but it doesn't seem to really engage with the issue of circularity. I guess the potential circularity becomes important when we start asking the question of how to model taking different actions. After intervening on our decision node do we just project forward as per Causal Decision Theory or do we want to do something like Functional Decision Theory that allows back-projecting as well? If it's the latter, how exactly do we determine what is subjunctively linked to what? When trying to answer these questions, this naturally leads us to ask, "What exactly are these counterfactual things anyway?" and that path (in my opinion) leads to circularity. These issues seem to occur even in situations when we know perfectly how to forwards predict and where we are given sufficient information that we don't need to use abduction. Anyway, thanks for your submission! I'm really happy to have at least one submission already.

"Many thousands of date problems were found in commercial data processing systems and corrected. (The task was huge – to handle the work just for General Motors in Europe, Deloitte had to hire an aircraft hangar and local hotels to house the army of consultants, and buy hundreds of PCs)."

Sounds like more than a few weeks.

Was it founded by the Evil Twin of Peter Singer?

https://www.smbc-comics.com/comic/ev

3trevor2y
Do you mean Peter Stinger?

Define "related"?

Stories of wishes gone awry, like King Midas, are the original example.

I've definitely looked at it, but don't recall trying it. My first questions from looking at the screenshots are about its annotation capabilities (e.g.: naming functions, identifying structs) and its UI (IDA highlighting every use of a register when you mouse over it is stupendously useful).

This reminds me of how I did the background reading for my semantic code search paper  ( http://www.jameskoppel.com/files/papers/yogo.pdf ). I made a list of somewhat related papers, printed out a big stack of them at a time, and then set a 7.5-minute timer for each. By the end of that 7.5 minutes, I should be able to write a few sentences about what exact problem it solves and what its big ideas are, as well as add more cited work / search keywords to expand my list of related papers. I'd often need to give myself a few extra minutes, but I ... (read more)

1drossbucket2y
I like this idea a lot. I often do pomodoros but there seems to be a lot of potential for other uses of timers while working.

++++ 

Anytime I try a new language, first question is "Is there a JetBrains IDE or plugin for it?"

Bryan Caplan has been creating his "economics graphic novels" using an old "comic creator" software. He has a valid license, but the company that made it went out of business decades ago, and the license server no longer exists. So I disabled the license-server check for him.

When I worked in mobile, I did it frequently. Customer would call us and say our SDK isn't working. I'd download their app off the app store, decompile it, and figure out exactly how they're using us.

It's also surprisingly frequent how often I want to step through a library or program that I'm u... (read more)

Software: Omnigraffle

Need: Making figures and diagrams (e.g.: for scientific papers)

 

Other software I've tried: Sketch, Illustrator, tikz

 

Omnigraffle has beautiful defaults, and makes it very fast to create shapes and diagrams that connect. It can make crossing edges look pretty and clear instead of a mess. Illustrator gives you a lot more flexibility (e.g.: strokes whose width gradually changes, arbitrary connection points for arrows), but you can be way faster at making figures with Omnigraffle.

Use Illustrator for making art and posters. Use Sketch (or Figma) for mocking up UIs. Use Omnigraffle for making figures.

Software: IDA

Need: Binary reverse-engineering

Other programs I've tried: ghidra, OllyDbg, Hopper

IDA is fast and well-featured. I've had multiple times where the process of going from having a question about a binary to figuring out the answer took minutes.

Hopper has a nicer UI, but works on fewer executables and does not analyze the binary as well.

IDA gets criticized for "having an interface designed by programmers," but ghidra is much worse in that regard. "A giant Java program written by the government" describes it well. ghidra supposedly has a collaboration mode, bu... (read more)

2PointlessOne2y
Have you tried radare2? If you have, how does it stack against IDA?
2oge2y
Just out of curiosity: what kinds of binaries do you need to reverse-engineer on a regular basis?

I'd appreciate if someone touched on HR software and CRMs for small businesses.

Also, collaborative document editing that isn't owned by Google.

4Aryeh Englander2y
My wife specializes in this and she says that's like asking what clothing should I buy. It depends on a lot of factors plus an element of taste. If you want you can message me - my wife says she's happy to help you work through the options a bit for free.

I've been running exercises like the one described here for nearly 5 years as part of my business ( http://jameskoppelcoaching.com/ ). They go by the name "design exercises." They're done in both live and canned formats. The chief addition is that, in the live versions, the new features are chosen antagonistically.

 

Dagon in another thread claims "Making something that's maintainable and extensible over many years isn't something that can be trained in small bites." My long list of testimonials begs to differ.

I'm inclined to agree, but I want to add that security is all connected. There are several direct causal paths from compromised user data to compromised dev workstation (and vice versa).

 

Do you think the point of adding nuclear close calls isn't to move public policy into a direction that's less likely to produce a nuclear accident? That's a political purpose. It's not party political but it's political.  

 

Of course I believe it serves that purpose. I also believe that the most recent edit in all of Wikipedia at my time of writing, deleting a paragraph from the article on Leanna Cavanagh (a character from some British TV show I'd never heard of) serves to decrease the prominence of that TV show, which will weaken whatever m... (read more)

6ChristianKl3y
While bringing less attention to Yorkshire might be an effect of the edit, it's not the purpose of the edit. Purpose is about intent. FLI is an organization that has a mission. Part of that mission is to get governments to act better with regard to X-risk. My point here isn't criticism; it's understanding why the thing that happened happened. I personally have no problem with either what Vipul did or what FLI did here. If, however, you want to understand why there was the opposition there was, and to edit in a way that's less likely to face opposition, it makes sense to understand why the scenario played out the way it did. That doesn't mean they weren't collateral damage.

It was paid editing for a political agenda. From an EA perspective, paying someone to do paid editing or political lobbying is completely fine. On the other hand, you have the "money isn't speech" side, which considers using money to do lobbying, or to get someone to change Wikipedia according to your political interests, bad.

 

Putting aside that a volunteer project by a non-profit is not paid editing, I also take some issue with arguments that improvements to the page on nuclear close calls are "political":

 

I mean that some individuals later in this grou... (read more)

1ChristianKl3y
Do you think the point of adding nuclear close calls isn't to move public policy in a direction that's less likely to produce a nuclear accident? That's a political purpose. It's not party political, but it's political. There was an EA project where Vipul paid a few people to write EA-related Wikipedia content on a variety of EA issues. This triggered resistance from people like Jytdog, who see it as their mission to prevent commercial and other interests from infringing on Wikipedia. While of course not all EA people involved in that episode were paid, it's part of the reason why some admins were very protective about EA articles. If you look at the account behind the edit you point to, it's an account that mostly edits articles for a single cause. Given what you said, it's also an FLI-associated account that edits FLI pages without any disclosure of how the account owner relates to FLI. That's why it's likely perceived as an account run by someone with an agenda they are not open about. It doesn't look like a person who comes to Wikipedia regularly while browsing the web and edits something when they see an error. I first thought that you were talking about something that happened later, not back in 2015. Nobody blocked https://en.wikipedia.org/wiki/List_of_nuclear_close_calls from existing.

The talk of an admin who controlled those pages with an iron fist came from before this project existed, presumably from affiliates who had tried to edit in good faith exactly as you've advocated, but were shut down.

We were far from the first or only group that had Wikipedia-editing sessions. I've walked past signs at my university advertising them for other groups. Ours was quite benign. I'm reading some of the discussion from back then; their list included things like adding links for the page on nuclear close calls.



I've seen articles on hot-bu... (read more)

3ChristianKl3y
It was paid editing for a political agenda. From an EA perspective, paying someone to do paid editing or political lobbying is completely fine. On the other hand, you have the "money isn't speech" side, which considers using money to do lobbying or to get someone to change Wikipedia according to their political interests bad. While there might have been multiple admins that opposed the EA effort at that time, Jytdog was one of the central admins, and he isn't around any longer because of misbehavior in his quest to fight against political and commercial interests pushing their point of view onto Wikipedia. From the Wikipedia perspective, there's a difference between a Wikipedia user group that does a Wikipedia-editing session together, which is great, and an organization having a project to change Wikipedia according to its agenda. If you start a WikiProject X-risk and then coordinate within that WikiProject, that's democratic participation. If an organization coordinates internally and then tries to push its views onto Wikipedia, that's different. While I would prefer that such an FLI project wouldn't face opposition, I do understand the other side. The quest of protecting Wikipedia against organizations who try to push their agenda onto Wikipedia by hiring people has value. The way around this is general democratic participation. Inclusionism against exclusionism is a constant fight on Wikipedia, and I don't think opting out of it because there are many exclusionists on Wikipedia is a good idea. I think Wikipedia is central enough that it's worth it for more people to engage with it.

During my stint volunteering with the FLI, I worked on a project to improve Wikipedia's coverage of existential risk. I don't remember the ultimate outcome of the project, but we were up against an admin who "owned" many of those pages, and was hostile to many of FLI's views.

This article, at least by appearances, is an excellent account of the problems and biases of Wikipedia: https://prn.fm/wikipedia-rotten-core/

During my stint volunteering with the FLI, I worked on a project to improve Wikipedia's coverage of existential risk.

An organization having a project to change Wikipedia is not what Wikipedia is about and triggers immune system response of Wikipedia. 

The way to engage with Wikipedia is to start by doing a bit when you naturally come across a Wikipedia article with issues. 

For the whole EA endeavor that encountered resistance, it's worth noting that Jytdog, who was one of the main admins against it, is now banned.

The underlying thought behind both this and the previous post seems to be the notion that counterfactuals are somehow mysterious or hard to grasp. This looks like a good chance to plug our upcoming ICML paper, which reduces counterfactuals to a programming language feature. It gives a new meaning to "programming Omega." http://www.zenna.org/publications/causal.pdf

It's a small upfront cost for gradual long-term benefit. Nothing in that says one necessarily outweighs the other. I don't think there's anything more to be had from this example beyond "hyperbolic discounting."

I think it's simpler than this: renaming it is a small upfront cost for gradual long-term benefit. Hyperbolic discounting kicks in. Carmack talks about this in his QuakeCon 2013 talk, saying "humans are bad at integrating small costs over time": https://www.youtube.com/watch?v=1PhArSujR_A
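The Carmack point can be made quantitative. A sketch with made-up numbers: a refactor costing 30 minutes today saves a minute a day afterward, so undiscounted it pays for itself three times over, but under hyperbolic discounting it "feels" like a loss.

```python
# Why small benefits integrated over time lose to an upfront cost under
# hyperbolic discounting. All numbers are made up for illustration.
cost_now = 30.0                       # minutes spent refactoring today
benefit_per_day = 1.0                 # minutes saved each subsequent day
horizon = 100                         # days
k = 0.1                               # hyperbolic discount rate: value ~ 1/(1 + k*t)

undiscounted = benefit_per_day * horizon
perceived = sum(benefit_per_day / (1 + k * t) for t in range(1, horizon + 1))

print(undiscounted > cost_now)   # objectively worth doing
print(perceived > cost_now)      # subjectively rejected
```

The discounted stream comes to roughly 23 "felt" minutes against a 30-minute cost, so the objectively positive trade gets declined.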

 

But, bigger picture, code quality is not about things like local variable naming. This is Mistake #4 of the 7 Mistakes that Cause Fragile Code: https://jameskoppelcoaching.com/wp-content/uploads/2018/05/7mistakes-2ndedition.pdf

2Adam Zerner3y
Yes, but at some point the cost starts to outweigh the benefit. Eg. going from yyyymmdd to currentDate is worthwhile, but going from currentDate to betterName, or from betterName to evenBetterName might not be worthwhile. And so I think you do end up having to ask yourself the question instead of assuming that all code quality improvements are worthwhile. Although I also think there's wisdom in using heuristics rather than evaluating whether each and every case is worthwhile. I agree with the big picture point that things that are sort of siloed off aren't as important for code quality. I chose this example because I thought it would be easiest to discuss. However, although I don't think they are as important, or even frequently important, I do think that stuff like local variable names end up often being important. I'm not sure what the right adjective is here, but I guess I can say I find it to be important enough where it's worth paying attention to.

I read/listened to Lean Startup back in 2014. Reading it helped me realize many of the mistakes I had made in my previous startup, mistakes I made even though I thought I understood the "Lean startup" philosophy by osmosis.
 

Indeed, "Lean Startup" is a movement whose terminology has spread much faster than its content, creating a poisoned well that inoculates people against learning it.

For example, the term "minimum viable product" has been mutated to have a meaning emphasizing the "minimum" over the "product," making it harder to spread the ac... (read more)
