Daniel Gilbert on the benefits of irrevocable commitment
I used to think it was good to be able to change your mind after making a decision. In 2002, I changed that view.
Jane Ebert and I found that people are generally happier when they cannot undo their decisions. In our experiments, when participants could reverse a decision, they kept dwelling on its pros and cons; when they could not, they focused on the decision's upside and overlooked its downside. Accordingly, irrevocable decisions left them more satisfied than reversible ones. Ironically, participants were unaware of this: they strongly preferred to keep the option of changing their minds.
Now, up to that moment I had believed that marriage springs from love. But these experiments suggested that marriage can also produce love. And if you take the data seriously, you should act on it. So once I learned the result, I proposed to the girlfriend I was living with. She said yes. The decision proved to be the right one: after she became my wife, I loved her even more.
– Social Psychology
People tend to overlook the speed and strength of their psychological immune system, which includes strategies for rationalizing, discounting, forgiving, and limiting emotional trauma. Largely because we neglect this system (Gilbert and Wilson call this phenomenon immune neglect), we adapt to setbacks such as disabilities, romantic breakups, failed exams, lost jobs, and personal and team defeats more readily than we would expect. Surprisingly, Gilbert and his colleagues report (2004) that major negative events, which activate our psychological defenses, cause distress that lasts a shorter time than minor irritations, which do not. In other words, we are resilient.
– Social Psychology
Our intuitive theory seems to be: we want, we get, we're happy. If that were true, this chapter would be much shorter. In fact, Gilbert and Wilson (2000) point out, we often "miswant." People who imagine an idyllic desert-island vacation of sun, surf, and sand may be disappointed when they discover "how much they need their everyday routines, intellectual stimulation, and tasty snacks." We tend to assume that if our candidate or team wins, we will be happy for a long time. But study after study shows we are vulnerable to the impact bias: overestimating the enduring impact of emotion-causing events. The emotional traces of such good news evaporate far sooner than we expect.
– Social Psychology
- We care about the impression we make on others, and we tend to believe that others pay more attention to us than they actually do (the spotlight effect).
- We also tend to believe that our emotions show more than they actually do (the illusion of transparency).
– Social Psychology
A good theory should:
▪ effectively summarize a wide range of observations;
▪ make clear predictions that allow us to:
  ▪ confirm or modify the theory;
  ▪ generate new exploration;
  ▪ suggest practical applications.
Up for the day with nothing planned; might as well head to the office.
A little excited for Lex Fridman's upcoming interview with Donald Trump.
The most common mistake of a smart engineer is to optimize something that shouldn't exist.
Tried it: sure enough, I can't hang from the bar for 100 seconds. I managed only about 70.
Lately I've been using our own read-it-later product. I have endless new ideas I want to add to it, but I need to restrain myself: add as little as possible for now, and get it stable and reliable first.
https://mp.weixin.qq.com/s/Gd-QWHQrIfuuz5R0WqA0Ug, on how Duolingo reignited user growth. Although it's last year's article, I read it closely and thought it through; it should be of some help to us. In the current environment, it's better to run things with precision, so we can survive longer.
Been up for a while; planning to pick out the highlights of Lenny's February article on Duolingo, translate them into Chinese, and publish them on my WeChat official account.
Rereading The Razor's Edge this time, it resonated more than before. I seemed to glimpse all kinds of lives in it. It might be worth rereading the literary classics I've read since middle school.
Of course, it may also be that pair-reading with an AI has given me a deeper reading experience.
Back to a regular life: spend more time with family, sleep early and rise early, exercise, eat healthily, think and practice more, socialize less.
Honestly, the best thing about ride-hailing is the lack of pressure. I don't have to tell the driver where I'm going after getting in, and I can even set "don't talk to me during the ride" and "don't call me on arrival." Everything about it suits the socially anxious.
I have a habit, probably a mild compulsion: before a trip, I charge every electronic device to 100%; even 98% leaves me a little uneasy. I suspect plenty of people are the same.
Hu Shih's three lectures on methods of scholarship are also excellent; even today they remain quite practical.
I think of Perplexity as a knowledge discovery engine rather than a search engine. Of course, we call it an answer engine, but everything matters here. The journey doesn't end once you get an answer. In my opinion, the journey begins after you get an answer. You see related questions at the bottom, suggested questions to ask. Why? Because maybe the answer was not good enough, or the answer was good enough, but you probably want to dig deeper and ask more.
- Aravind Srinivas
Wikipedia was built on a few simple principles; here is one of them:
We’re always making sure we can cite what it says, what we write, every sentence. Now, what if we ask the chatbot to do that? Then we realized, that’s literally how Wikipedia works.
In Wikipedia, if you do a random edit, people expect you to actually have a source for that, and not just any random source. They expect you to make sure that the source is notable. There are so many standards for what counts as notable and not. We decided this was worth working on.
The first employee we hired came and asked us about health insurance. A normal need, but I didn't care. I was like, "Why do I need health insurance? If this company dies, who cares?" My other two co-founders were married, so they had health insurance through their spouses, but this guy was looking for health insurance, and I didn't know anything about it.
Who are the providers? What is co-insurance, or a deductible? None of it made any sense to me. You go to Google. Insurance is a major ad-spend category, so even when you ask for something, Google has no incentive to give you clear answers. They want you to click on all these links and read for yourself, because all these insurance providers are bidding to get your attention.
We integrated a Slack bot that just pinged GPT-3.5 and answered questions. Now, it sounds like problem solved, except we didn't even know whether what it said was correct or not. In fact, it was saying incorrect things. We were like, "Okay, how do we address this problem?" We remembered our academic roots. Dennis and I were both academics; Dennis is my co-founder. We said, "Okay, what is one way we stop ourselves from saying nonsense in a peer-reviewed paper?"
We're always making sure we can cite what it says, what we write, every sentence. Now, what if we ask the chatbot to do that? Then we realized, that's literally how Wikipedia works. In Wikipedia, if you do a random edit, people expect you to actually have a source for that, and not just any random source. They expect you to make sure that the source is notable. There are so many standards for what counts as notable and not. We decided this was worth working on.
It's not just a problem that will be solved by a smarter model. There are so many other things to do on the search layer and the sources layer, and in making sure the answer is well formatted and presented to the user. That's why the product exists.
- Aravind Srinivas
This, it seems, was the origin of pplx.