Ways to think about AGI

Benedict Evans, 4 May 2024


How do we think about a fundamentally unknown and unknowable risk, when the experts agree only that they have no idea?

The manuscript for ‘A Logic Named Joe’

In 1946, my grandfather, writing as ‘Murray Leinster’, published a science fiction story called ‘A Logic Named Joe’. Everyone has a computer (a ‘logic’) connected to a global network that does everything from banking to newspapers and video calls. One day, one of these logics, ‘Joe’, starts giving helpful answers to any request, anywhere on the network: invent an undetectable poison, say, or suggest the best way to rob a bank. Panic ensues - ‘Check your censorship circuits!’ - until they work out what to unplug. (My other grandfather, meanwhile, was using computers to spy on the Germans, and then the Russians.)

For as long as we’ve thought about computers, we’ve wondered if they could make the jump from mere machines, shuffling punch-cards and databases, to some kind of ‘artificial intelligence’, and wondered what that would mean, and indeed, what we’re trying to say with the word ‘intelligence’. There’s an old joke that ‘AI’ is whatever doesn’t work yet, because once it works, people say ‘that’s not AI - it’s just software’. Calculators do super-human maths, and databases have super-human memory, but they can’t do anything else, and they don’t understand what they’re doing, any more than a dishwasher understands dishes, or a drill understands holes. A drill is just a machine, and databases are ‘super-human’ but they’re just software. Somehow, people have something different, and so, on some scale, do dogs, chimpanzees and octopuses and many other creatures. AI researchers have come to talk about this as ‘general intelligence’ and hence making it would be ‘artificial general intelligence’ - AGI.

If we really could create something in software that was meaningfully equivalent to human intelligence, it should be obvious that this would be a very big deal. Can we make software that can reason, plan, and understand? At the very least, that would be a huge change in what we could automate, and as my grandfather and a thousand other science fiction writers have pointed out, it might mean a lot more.

Every few decades since 1946, there’s been a wave of excitement that something like this might be close, each time followed by disappointment and an ‘AI Winter’, as the technology approach of the day slowed down and we realised that we needed an unknown number of unknown further breakthroughs. In 1970 the AI pioneer Marvin Minsky claimed that in “from three to eight years we will have a machine with the general intelligence of an average human being”, but each time we thought we had an approach that would produce that, it turned out that it was just more software (or just didn’t work).

As we all know, the Large Language Models (LLMs) that took off 18 months ago have driven another such wave. Serious AI scientists who previously thought AGI was probably decades away now suggest that it might be much closer. At the extreme, the so-called ‘doomers’ argue there is a real risk of AGI emerging spontaneously from current research, that this could be a threat to humanity, and call for urgent government action. Some of this comes from self-interested companies seeking barriers to competition (‘This is very dangerous and we are building it as fast as possible, but don’t let anyone else do it’), but plenty of it is sincere.

(I should point out, incidentally, that the doomers’ ‘existential risk’ concern that an AGI might want to and be able to destroy or control humanity, or treat us as pets, is quite independent of more quotidian concerns about, for example, how governments will use AI for face recognition, or AI bias, or AI deepfakes, and all the other ways that people will abuse AI or just screw up with it, just as they have with every other technology.)

However, for every expert that thinks that AGI might now be close, there’s another who doesn’t. There are some who think LLMs might scale all the way to AGI, and others who think, again, that we still need an unknown number of unknown further breakthroughs.

More importantly, they would all agree that they don’t actually know. This is why I used terms like ‘might’ or ‘may’ - our first stop is an appeal to authority (often considered a logical fallacy, for what that’s worth), but the authorities tell us that they don’t know, and don’t agree.

They don’t know, either way, because we don’t have a coherent theoretical model of what general intelligence really is, nor why people seem to be better at it than dogs, nor how exactly people or dogs are different to crows or indeed octopuses. Equally, we don’t know why LLMs seem to work so well, and we don’t know how much they can improve. We know, at a basic and mechanical level, about neurons and tokens, but we don’t know why they work. We have many theories for parts of these, but we don’t know the system. Absent an appeal to religion, we don’t know of any reason why AGI cannot be created (it doesn’t appear to violate any law of physics), but we don’t know how to create it or what it is, except as a concept.

And so, some experts look at the dramatic progress of LLMs and say ‘perhaps!’ and others say ‘perhaps, but probably not!’, and this is fundamentally an intuitive and instinctive assessment, not a scientific one.

Indeed, ‘AGI’ itself is a thought experiment, or, one could suggest, a place-holder. Hence, we have to be careful of circular definitions, and of defining something into existence, certainty or inevitability.

If we start by defining AGI as something that is in effect a new life form, equal to people in ‘every’ way (barring some sense of physical form), even down to concepts like ‘awareness’, emotions and rights, and then presume that given access to more compute it would be far more intelligent (and that there even is a lot more spare compute available on earth), and presume that it could immediately break out of any controls, then that sounds dangerous, but really, you’ve just begged the question.

As Anselm demonstrated, if you define God as something that exists, then you’ve proved that God exists, but you won’t persuade anyone. Indeed, a lot of AGI conversations sound like the attempts by some theologians and philosophers of the past to deduce the nature of god by reasoning from first principles. The internal logic of your argument might be very strong (it took centuries for philosophers to work out why Anselm’s proof was invalid) but you cannot create knowledge like that.

Equally, you can survey lots of AI scientists about how uncertain they feel, and produce a statistically accurate average of the result, but that doesn’t of itself create certainty, any more than surveying a statistically accurate sample of theologians would produce certainty as to the nature of god, or, perhaps, bundling enough sub-prime mortgages together can produce AAA bonds, another attempt to produce certainty by averaging uncertainty. One of the most basic fallacies in predicting tech is to say ‘people were wrong about X in the past so they must be wrong about Y now’, and the fact that leading AI scientists were wrong before absolutely does not tell us they’re wrong now, but it does tell us to hesitate. They can all be wrong at the same time.

Meanwhile, how do you know that’s what general intelligence would be like? Isaiah Berlin once suggested that even presuming there is in principle a purpose to the universe, and that it is in principle discoverable, there’s no a priori reason why it must be interesting. ‘God’ might be real, and boring, and not care about us, and we don’t know what kind of AGI we would get. It might scale to 100x more intelligent than a person, or it might be much faster but no more intelligent (is intelligence ‘just’ about speed?). We might produce general intelligence that’s hugely useful but no more clever than a dog, which, after all, does have general intelligence, and, like databases or calculators, a super-human ability (scent). We don’t know. 

Taking this one step further, as I listened to Mark Zuckerberg talking about Llama 3, it struck me that he talks about ‘general intelligence’ as something that will arrive in stages, with different modalities a little at a time. Maybe people will point at the ‘general intelligence’ of Llama 6 or ChatGPT 7 and say “That’s not AGI, it’s just software!” We created the term AGI because AI came just to mean software, and perhaps ‘AGI’ will be the same, and we’ll need to invent another term.

This fundamental uncertainty, even at the level of what we’re talking about, is perhaps why all conversations about AGI seem to turn to analogies. If you can compare this to nuclear fission then you know what to expect, and you know what to do. But this isn’t fission, or a bioweapon, or a meteorite. This is software, that might or might not turn into AGI, that might or might not have certain characteristics, some of which might be bad, and we don’t know. And while a giant meteorite hitting the earth could only be bad, software and automation are tools, and over the last 200 years automation has sometimes been bad for humanity, but mostly it’s been a very good thing that we should want much more of.

Hence, I’ve already used theology as an analogy, but my preferred analogy is the Apollo Program. We had a theory of gravity, and a theory of the engineering of rockets. We knew why rockets didn’t explode, and how to model the pressures in the combustion chamber, and what would happen if we made them 25% bigger. We knew why they went up, and how far they needed to go. You could have given the specifications for the Saturn rocket to Isaac Newton and he could have done the maths, at least in principle: this much weight, this much thrust, this much fuel… will it get there? We have no equivalents here. We don’t know why LLMs work, how big they can get, or how far they have to go. And yet, we keep making them bigger, and they do seem to be getting close. Will they get there? Maybe, yes!
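(As a purely illustrative aside, the kind of sum Newton could have checked is the rocket equation, which follows from his own laws of motion: $\Delta v = v_e \ln(m_0 / m_f)$. Plug in rough, assumed numbers rather than the real Saturn figures, say an effective exhaust velocity $v_e \approx 3{,}000$ m/s and a mass ratio $m_0 / m_f \approx 20$, and you get $\Delta v \approx 3{,}000 \times \ln 20 \approx 9{,}000$ m/s, in the region of the roughly 9.4 km/s needed to reach low Earth orbit. A knowable, checkable calculation, with nothing comparable for LLMs and AGI.)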

On this theme, some people suggest that we are in the empirical stage of AI or AGI: we are building things and making observations without knowing why they work, and the theory can come later, a little as Galileo came before Newton (there’s an old English joke about a Frenchman who says ‘that’s all very well in practice, but does it work in theory’). Yet while we can, empirically, see the rocket going up, we don’t know how far away the moon is. We can’t plot people and ChatGPT on a chart and draw a line to say when one will reach the other, even just extrapolating the current rate of growth. 

All analogies have flaws, and the flaw in my analogy, of course, is that if the Apollo program went wrong the downside was not, even theoretically, the end of humanity. A little before my grandfather, here’s another magazine writer on unknown risks:


What, then, is your preferred attitude to risks that are real but unknown? Which thought experiment do you prefer? We can return to half-forgotten undergraduate philosophy (Pascal’s Wager! Anselm’s Proof!), but if you can’t know, do you worry, or shrug? How do we think about other risks? Meteorites are a poor analogy for AGI because we know they’re real, we know they could destroy mankind, and they have no benefits at all (unless they’re very very small). And yet, we’re not really looking for them.

Presume, though, you decide the doomers are right: what can you do? The technology is in principle public. Open source models are proliferating. For now, LLMs need a lot of expensive chips (Nvidia sold $47.5bn in the last 12 months and can’t meet demand), but on a decade’s view the models will get more efficient and the chips will be everywhere. In the end, you can’t ban mathematics. On a scale of decades, it will happen anyway. If you must use analogies to nuclear fission, imagine if we discovered a way that anyone could build a bomb in their garage with household materials - good luck preventing that. (A doomer might respond that this answers the Fermi paradox: at a certain point every civilisation creates AGI and it turns them into paperclips.)

By default, though, this will follow all the other waves of AI, and become ‘just’ more software and more automation. Automation has always produced frictional pain, back to the Luddites, and the UK’s Post Office scandal reminds us that you don’t need AGI for software to ruin people’s lives. LLMs will produce more pain and more scandals, but life will go on. At least, that’s the answer I prefer myself.
