Is this like "have the hackathon participants do manual neural architecture search and train with L1 loss"?
Ah, I misinterpreted your question. I thought you were looking for ideas for a team participating in the hackathon, not for you as the organizer of the hackathon.
In my experience, most hackathons are judged qualitatively, so I wouldn't worry about ideas (mine or others') without a strong metric.
Do a literature survey for the latest techniques on detecting whether an image/prose text/piece of code is computer-generated or human-generated. Apply it to a new medium (e.g. if it's an article about text, borrow its techniques to apply to images, or vice-versa).
Alternatively, take the opposite approach and show AI safety risks. Can you train a system that looks very accurate, but gives incorrect output on specific examples that you choose during training? Just as one idea, some companies use face recognition as a key part of their security system. ...
>75% confidence: No consistent strong play in a simple game of imperfect information (e.g. battleship) for which it has not been specifically trained.
>50% confidence: No consistent "correct" play in a simple game of imperfect information (e.g. battleship) for which it has not been specifically trained. Correct here means making only valid moves, and no useless moves. For example, in battleship a useless move would be attacking the same grid coordinate twice.
>60% confidence: Bad long-term sequence memory, particularly when combined with non-memorization tasks. For example, suppose A=1, B=2, etc. What is the sum of the characters in a given page of text (~500 words)?
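To make the letter-sum task concrete, here's a minimal scorer, assuming A=1 through Z=26, case-insensitive, with non-letter characters ignored (one reasonable reading of the task):

```python
def letter_sum(text: str) -> int:
    """Sum character values with A=1, B=2, ..., Z=26 (case-insensitive).

    Assumption: punctuation, digits, and whitespace contribute nothing.
    """
    return sum(ord(c) - ord('a') + 1 for c in text.lower() if 'a' <= c.lower() <= 'z')

# letter_sum("abc") -> 1 + 2 + 3 = 6
```

Running this over a ~500-word page gives the ground truth to check a model's answer against; the interesting part is whether the model can hold the running total over that long a sequence.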
Above 99% certainty:
Run inference with reasonable latency (e.g. < 1 second for text completion) on a typical home gaming computer (i.e. one with a single high-powered GPU).
Didn't this basically happen with LTCM? They had losses of $4B on $5B in assets and a borrow of $120B. The Federal Reserve had to broker a coordinated rescue by the major banks to avoid blowing up the financial markets, but meltdown was avoided.
Edit: Don't pyramid schemes do this all the time, unintentionally? Like, Madoff basically did this and then suddenly (unintentionally) defaulted.
But if I had to use the billion dollars on evil AI specifically, I'd use the billion dollars to start an AI-powered hedge fund and then deliberately engineer a global liquidity crisis.
How exactly would you do this? Lots of places market "AI powered" hedge funds, but (as someone in the finance industry) I haven't heard much about AI beyond things like regularized regression actually giving significant benefit.
Even if you eventually grew your assets to $10B, how would you engineer a global liquidity crisis?
+1, CLion is vastly superior to VS Code or emacs/vi in capabilities and ease of setup, particularly for C++ and Rust
It seems like this is a single building version of a gated community / suburb? In "idealized" America (where by idealized I mean somewhat affluent, morally homogeneous within the neighborhood, reasonably safe, etc), all the stuff you're describing already happens. Transportation for kids is provided by carpools or by the school, kids wander from house to house for meals and play, etc. Families get referrals for help (housekeeping, etc) from other families, or because there are a limited number of service providers in the area. In general, these aren't the ...
I feel like I have all the things you state are required to have a huge edge, and yet...my edge is not obvious to me. Most of the money-making opportunities in DeFi seem to involve at least one of:
Yield farming does look attractive, and I plan to invest some stablecoins in the near future.
I guess I expect the edge to manifest in the form of being able to look at simple, high-upside, good ideas being implemented by smart people, correctly distinguish them from uninspired hacks, and then being able to take effective action on your beliefs. (Good ideas like Bitcoin or Ethereum themselves, or like CFMMs.)
As a random example, if you go look at harvest.finance right now, you can see that there's a huge APY on investing in Uniswap liquidity pools associated with tokenized versions of public company stock, like AAPL. These tokens are designed by a ...
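For context on what a CFMM actually is: the core of a constant-product market maker (the design behind Uniswap-style liquidity pools) fits in a few lines. This is a simplified sketch that ignores trading fees:

```python
def cfmm_swap(x_reserve: float, y_reserve: float, dx: float) -> float:
    """Constant-product market maker: reserves always satisfy x * y = k.

    Selling dx of token X into the pool returns dy of token Y such that
    (x + dx) * (y - dy) = k. Fees are omitted in this sketch.
    """
    k = x_reserve * y_reserve
    new_y = k / (x_reserve + dx)
    return y_reserve - new_y
```

Small trades execute near the spot price y/x; large trades get progressively worse prices, which is what keeps the pool from being drained.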
Despite being a webcomic, I think this is a funny, legitimate, and scathing critique of the philosophic life and to some extent the philosophy of rationality
I don't have an answer for the actual question you're asking (baseline side effects), however I would like to offer my experiences with nootropics. A number of years ago, I went through a phase where I tried a large variety of nootropics, including doing some basic psychometric tests on a daily basis (Stroop test, dual n-back, etc).
It's remarkably hard to find a test that measures cognitive ability and is immune to practice effects, but I figured some testing was better than just subjective assessments of how I felt.
In all my testing, I only found a ...
I think I mis-pasted the link. I have edited it, but it's supposed to go to https://www.aqr.com/Insights/Perspectives/A-Gut-Punch
I do agree that it increases the variance of outcomes. I think it decreases the mean, but I'm less sure about that. Here's one way I think it could work, if it does work: If some people are generally pessimistic about their chances of success, and this causes them to update their beliefs closer to reality, then Altman's advice would help. That is, if some people give up too easily, it will help them, while the outside world (investors, the market, etc) will put a check on those who are overly optimistic. However, I think it's still important to note that "...
Almost always, the people who say “I am going to keep going until this works, and no matter what the challenges are I’m going to figure them out”, and mean it, go on to succeed. They are persistent long enough to give themselves a chance for luck to go their way.
I've seen this quote (and similar ones) before. I believe that this approach is extremely flawed, to the point of being anti-rationalist. In no particular order, my objections are:
I don't think founder/investor class conflict makes that much sense as an explanation for that. It's easy to imagine a world in which investors wanted their money returned when the team updates downwards on their likelihood of success. (In fact, that sometimes happens! I don't know whether Sam would do that but my guess is only if the founders want to give up.)
I also don't think Sam, at least, glorifies pivots or ignores opportunity cost. For instance, from the first lecture of his startup course:
And pivots are supposed to be great, the more pivots the better. ...
The Moneyball story would be a good example of this. Essentially all of sports dismissed the quantitative approach until the A's started winning with it in 2002. Now quantitative management has spread to other sports like basketball, soccer, etc.
You could make a similar case for quantitative asset management. Pairs trading, one of the most basic kinds of quantitative trading, was discovered in the early 1980s (claims differ whether it was Paul Wilmott, Bamberger & Tartaglia at Morgan Stanley, or someone else). While the computation power to make ...
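A minimal version of the pairs-trading idea, for anyone unfamiliar: track the spread between two historically related assets and bet on mean reversion when it deviates far from normal. This sketch uses a z-score entry rule and omits the hedge ratio and cointegration tests a real implementation would need:

```python
import numpy as np

def pairs_signal(prices_a: np.ndarray, prices_b: np.ndarray, z_entry: float = 2.0) -> int:
    """Return a trade signal for the most recent observation.

    +1 = long A / short B (spread unusually cheap),
    -1 = short A / long B (spread unusually rich),
     0 = no trade. Entry threshold is a z-score of the raw price spread.
    """
    spread = prices_a - prices_b
    z = (spread[-1] - spread.mean()) / spread.std()
    if z > z_entry:
        return -1  # spread rich: bet it narrows
    if z < -z_entry:
        return 1   # spread cheap: bet it widens back
    return 0
```

Even in the 1980s this needed only modest compute, which is part of why it's striking how long it took to be adopted.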
Yeah, someone else suggested a novel nootropic drug as one answer - online education is basically an alternative form of that drug that is easier to realize (or at least, it's hard in a very different way)
...there are somewhere between six and ten billion people. At any given time, most of them are making mud bricks or field-stripping their AK-47s. - Neal Stephenson, Snow Crash
When we think of new technologies, we typically think of expensive, high-tech innovations, like energy production, robotics, etc. I would suggest that broader adoption of existing technologies, including social technologies, would have a bigger global impact.
For example, one technology that could dramatically impact GDP is improved managerial technology. This paper describes a s...
I don't have any immediate ideas on long positions - the AI winter isn't AI failing per se, right? It's just that we stop making progress so we're stuck where we are.
Maybe something like Doordash? They filed for an IPO recently, and if you think autonomous robots aren't going to drive down the cost of logistics then last-mile logistics companies might be underpriced. I have much less confidence in this kind of trade though.
You can short some AI ETFs. https://etfdb.com/themes/artificial-intelligence-etfs/ has a list, although some of those are obviously miscategorized - check the holdings to see how much you agree that they're representative.
You're left with market risk (i.e., beta) when you do this, but if you have a diversified portfolio you're probably okay with not putting on an additional specific hedge. That is, if you're right and the whole market rallies (but your ETF rallies less), you'll be okay.
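If you do want to quantify that residual exposure, beta is just the regression coefficient of the position's returns on the market's returns. A quick sketch:

```python
import numpy as np

def beta(asset_returns: np.ndarray, market_returns: np.ndarray) -> float:
    """Estimate market beta as cov(asset, market) / var(market).

    A short position sized at beta-times your notional roughly offsets
    broad market moves, leaving the AI-specific bet you actually want.
    """
    cov = np.cov(asset_returns, market_returns)[0, 1]  # sample covariance
    return cov / np.var(market_returns, ddof=1)        # sample variance
```

In practice you'd estimate this on daily returns over a year or so, and accept that the estimate is noisy.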
If you want to be more tactical, I would look at companies that are AI-...
A meeting quality score, as described in the patent referenced in this article (https://www.geekwire.com/2020/microsoft-patents-technology-score-meetings-using-body-language-facial-expressions-data/ )
Some additional ideas: There's a large variety of "loss functions" used in machine learning to score the quality of solutions; some of the most popular are below. A good overview is at https://medium.com/udacity-pytorch-challengers/a-brief-overview-of-loss-functions-in-pytorch-c0ddb78068f7
* Mean Absolute Error (a.k.a. L1 loss)
* Mean squared error
* Negative log-likelihood
* Hinge loss
* KL divergence
* BLEU loss for machine translation (https://www.wikiwand.com/en/BLEU)
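Most of the losses above are one-liners. A sketch in NumPy, assuming regression-style targets `y` and predictions `p` (and, for NLL, the probabilities a classifier assigned to the true classes):

```python
import numpy as np

def mae(y: np.ndarray, p: np.ndarray) -> float:
    """Mean Absolute Error (L1 loss)."""
    return float(np.mean(np.abs(y - p)))

def mse(y: np.ndarray, p: np.ndarray) -> float:
    """Mean Squared Error."""
    return float(np.mean((y - p) ** 2))

def nll(true_class_probs: np.ndarray) -> float:
    """Negative log-likelihood of the probabilities given to the true classes."""
    return float(-np.mean(np.log(true_class_probs)))

def kl_divergence(q: np.ndarray, p: np.ndarray) -> float:
    """KL divergence D(q || p) between two discrete distributions."""
    return float(np.sum(q * np.log(q / p)))
```

Hinge loss and BLEU are slightly more involved (margins and n-gram overlap, respectively) but follow the same "score the gap between prediction and target" pattern.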
There's also a large set of "goodness of fit" m...
Microsoft TrueSkill (multiplayer Elo-like system, https://www.wikiwand.com/en/TrueSkill)
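TrueSkill itself is Bayesian (it tracks a mean and an uncertainty per player and handles teams), but the simpler Elo update it builds on is a nice example of a scoring rule in its own right:

```python
def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """One Elo rating update. score_a is 1.0 for an A win, 0.5 draw, 0.0 loss.

    Expected score is a logistic function of the rating gap; ratings move
    in proportion to (actual - expected), scaled by the K-factor.
    """
    expected_a = 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta
```

Upsets (low expected score, actual win) move ratings a lot; expected results barely move them, which is exactly the "surprise-weighted" property you want in a skill metric.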
I originally read this EA as "Evolutionary Algorithms" rather than "Effective Altruism", which made me think of this paper on degenerate solutions to evolutionary algorithms (https://arxiv.org/pdf/1803.03453v1.pdf). One amusing example is shown in a video at https://twitter.com/jeffclune/status/973605950266331138?s=20
Here are some attributes I've noticed among people who self-identify as rationalists. They are:
- Overwhelmingly white and male. In the in-person or videoconference meetups I've attended, I don't think I've met more than a couple non-white people, and perhaps 10% were non-male.
- Skew extremely young. I would estimate the median age is somewhere in the early to mid 20s. I don't think I've ever met a rat over the age of 50. I'm not saying that they don't exist, but they seem extremely underrepresented relative to the general population.
- Overweight the impact