zlacker

[parent] [thread] 61 comments
1. 93po+(OP)[view] [source] 2023-03-01 14:52:52
OpenAI, if successful, will likely become the most valuable company in the history of the planet, both past and future.
replies(7): >>ratorx+85 >>TeMPOr+i5 >>ejb999+o5 >>berkle+16 >>unethi+P8 >>teeker+Pa >>rvz+Gl1
2. ratorx+85[view] [source] 2023-03-01 15:23:04
>>93po+(OP)
Really? I see a lot of competition in the space. I think they’d have to become significantly more successful than their competitors (some of which are massive AI powerhouses themselves) to achieve the market dominance necessary.
replies(1): >>TeMPOr+U5
3. TeMPOr+i5[view] [source] 2023-03-01 15:24:35
>>93po+(OP)
Let's hope they never get there, given your nicely concealed prediction that it would be the end of the world as we know it.
4. ejb999+o5[view] [source] 2023-03-01 15:25:05
>>93po+(OP)
seriously doubt it - what they are doing, others can do - and if they start generating a lot of revenue, it will attract competition - lots of it.

They don't have a moat big enough that many millions of dollars can't defeat it.

replies(2): >>hypert+x8 >>adamsm+Tj
◧◩
5. TeMPOr+U5[view] [source] [discussion] 2023-03-01 15:27:59
>>ratorx+85
I think what GP is saying is that success for OpenAI means making a lot of profit and then triggering the AI apocalypse - which is how they become the most valuable company in history, both past and future.
replies(1): >>novaRo+4b
6. berkle+16[view] [source] 2023-03-01 15:28:49
>>93po+(OP)
It’s just an autocomplete engine. Someone else will achieve AGI, and OpenAI will fall apart very quickly when that occurs.
replies(1): >>HarHar+4P
◧◩
7. hypert+x8[view] [source] [discussion] 2023-03-01 15:44:40
>>ejb999+o5
What if they have an internal ChatGPTzero, training and reprogramming itself, iterating at inhuman speed? A headstart in an exponential is a moat.

It surely will have huge blindspots (and people do too), but perhaps it will be good enough for self-improvement... or will be soon.

replies(1): >>jasonj+lq
8. unethi+P8[view] [source] 2023-03-01 15:46:06
>>93po+(OP)
Eh.

I think that keeping it permanently closed source when it has such amazing potential for changing society is unethical bordering on evil.

However,

I think the technology and the datasets will trickle out, perhaps a few years behind, and in time we will have truly open "AI" algos on our home computers/phones.

replies(1): >>justin+Bf
9. teeker+Pa[view] [source] 2023-03-01 15:55:27
>>93po+(OP)
Really? I feel like they'll go the way of Docker, but faster: right now super hot, nice tools/API, great PR. But it's built on open and known foundations; soon GPTs will be a commodity, and then something easier/better from FOSS will arise. It may take some time (2-3 years?) but this scenario seems most likely to me.

Edit: Ah, didn't get the "reference" - perhaps it will indeed be the last of the tech companies ever, at least the last one started by humans ;).

replies(2): >>startu+bo >>jrm4+8x
◧◩◪
10. novaRo+4b[view] [source] [discussion] 2023-03-01 15:56:46
>>TeMPOr+U5
Can you elaborate - what is "the AI apocalypse"? Is it just a symbolic metaphor, or is there any scientific research behind these words? To me, the greater danger is the unpredictable, toxic environment we currently observe in the world, dominated by purely human-made destructive decisions, often based on purely animal instincts.
replies(4): >>flango+xe >>kordle+ok >>YeGobl+pq >>LordDr+Nw1
◧◩◪◨
11. flango+xe[view] [source] [discussion] 2023-03-01 16:12:11
>>novaRo+4b
Without AI control loss: Permanent dictatorship by whoever controls AI.

With AI control loss: AI converts the atoms of the observable universe to maximally achieve whatever arbitrary goal it thinks it was given.

These are natural conclusions that can be drawn from the implied abilities of a superintelligent entity.

replies(1): >>kliber+Hp
◧◩
12. justin+Bf[view] [source] [discussion] 2023-03-01 16:15:58
>>unethi+P8
So, you reckon they're effectively operating like Google then?
◧◩
13. adamsm+Tj[view] [source] [discussion] 2023-03-01 16:32:58
>>ejb999+o5
Others might be able to. It's not trivial to get the capital needed to purchase enough compute to train LLMs from scratch and it's not trivial to hire the right people who can actually make them work.
◧◩◪◨
14. kordle+ok[view] [source] [discussion] 2023-03-01 16:34:24
>>novaRo+4b
If the assertion that GPT-3 is a "stochastic parrot" is wrong, there will be an apocalypse because whoever controls an AI that can reason is going to win it all.

The opinions on whether it is or isn't "reasoning" vary widely and depend heavily on interpretation of the interactions, many of which are hearsay.

My own testing with OpenAI calls + using Weaviate for storing historical data of exchanges indicates that such a beast appears to have the ability to learn as it goes. I've been able to teach such a system to write valid SQL from plain text feedback and from mistakes it makes by writing errors from the database back into Weaviate (which is then used to modify the prompt next time it runs).
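
Roughly, the loop looks like this - a minimal sketch, not my actual code. It assumes OpenAI's public chat-completions REST endpoint and Weaviate's /v1/objects endpoint; the SqlLesson class, the prompts, and the wiring are illustrative (Rust, reqwest with the "blocking" and "json" features, serde_json):

    use serde_json::{json, Value};

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let http = reqwest::blocking::Client::new();
        let api_key = std::env::var("OPENAI_API_KEY")?;

        // 1. Retrieval step (elided): pull prior errors/corrections for
        //    similar requests out of Weaviate and fold them into the prompt.
        let lessons = "Previous mistake: quoted a numeric column as a string.";

        // 2. Ask the model for SQL, with the stored lessons prepended.
        let resp: Value = http
            .post("https://api.openai.com/v1/chat/completions")
            .bearer_auth(&api_key)
            .json(&json!({
                "model": "gpt-3.5-turbo",
                "messages": [
                    { "role": "system",
                      "content": format!("Write valid SQL only. Lessons learned so far:\n{lessons}") },
                    { "role": "user", "content": "List customers with no orders in 2022." }
                ]
            }))
            .send()?
            .json()?;
        let sql = resp["choices"][0]["message"]["content"]
            .as_str().unwrap_or_default().to_string();

        // 3. Run the SQL; on failure, write the database error back into
        //    Weaviate so it modifies the prompt the next time around.
        if let Err(db_err) = run_sql(&sql) {
            http.post("http://localhost:8080/v1/objects")
                .json(&json!({
                    "class": "SqlLesson",
                    "properties": { "query": sql, "error": db_err }
                }))
                .send()?;
        }
        Ok(())
    }

    // Stand-in for executing against the real database.
    fn run_sql(_sql: &str) -> Result<(), String> {
        Err("ERROR: syntax error at or near \"SLECT\"".to_string())
    }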

◧◩
15. startu+bo[view] [source] [discussion] 2023-03-01 16:48:00
>>teeker+Pa
Possible. Coding as we know it might become obsolete. And it is a trillion-dollar industry.
replies(3): >>teeker+mu >>fsckbo+a81 >>mckrav+rH2
◧◩◪◨⬒
16. kliber+Hp[view] [source] [discussion] 2023-03-01 16:53:15
>>flango+xe
> whatever arbitrary goal it thinks it was given [...] abilities of a superintelligent entity

Don't these two phrases contradict each other? Why would a super-intelligent entity need to be given a goal? The old argument that we'll fill the universe with staplers pretty much assumes and requires the entity NOT to be super-intelligent. An AGI would only become one when it gains the ability to formulate its own goals, I think. Not that it helps much if that goal is somehow contrary to the existence of the human race, but if the goal is self-formulated, then there's a sliver of hope that it can be changed.

> Permanent dictatorship by whoever controls AI.

And that is honestly frightening. We know for sure that there are ways of speaking, writing, or showing things that are more persuasive than others. We've been perfecting the art of convincing others to do what we want them to do for millennia. We got quite good at it, but as with everything human, we lack rigor and consistency; even the best speeches are uneven in their persuasive power.

An AI trained for transforming simple prompts into a mapping of demographic to what will most likely convince that demographic to do what's prompted doesn't need to be an AGI. It doesn't even need to be much smarter than it already is. Whoever implements it first will most likely first try to convince everyone that all AI is bad (other than their own) and if they succeed, the only way to change the outcome would be a time machine or mental disorders.

(Armchair science-fiction reader here, pure speculation without any facts, in case you wondered :))

replies(1): >>flango+qv
◧◩◪
17. jasonj+lq[view] [source] [discussion] 2023-03-01 16:55:35
>>hypert+x8
It's a fundamentally different problem. AlphaZero (DeepMind) was able to be trained this way because it was set up with an explicit reward function and end condition. Competitive self-play needs a reward function.

It can't just "self-improve towards general intelligence".

What's the fitness function of intelligence?
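
To make the point concrete, here's a toy sketch (nothing to do with AlphaZero itself): "self-improvement" is trivial once someone has written down an explicit score() - and nobody knows how to write score() for general intelligence.

    // Toy hill-climber: iteration only "improves" because the reward
    // function is hand-written. score() here is arbitrary, peaking at 3.0.
    fn score(x: f64) -> f64 {
        -(x - 3.0).powi(2)
    }

    fn main() {
        let mut best = 0.0_f64;
        for i in 0..10_000 {
            // Deterministic "mutation" so the sketch needs no dependencies.
            let step = ((i % 7) as f64 - 3.0) * 0.01;
            let candidate = best + step;
            if score(candidate) > score(best) {
                best = candidate; // keep whatever the reward function prefers
            }
        }
        println!("converged near {best:.3}"); // ~3.000, by construction of score()
    }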

replies(2): >>pixl97+hV >>hypert+bY
◧◩◪◨
18. YeGobl+pq[view] [source] [discussion] 2023-03-01 16:55:47
>>novaRo+4b
>> Is it just a symbolic metaphor or is there any scientific research behind this words?

Of course there is. See (Cameron, 1984) (https://en.wikipedia.org/wiki/The_Terminator)

◧◩◪
19. teeker+mu[view] [source] [discussion] 2023-03-01 17:09:51
>>startu+bo
I have ChatGPT make up some code every now and then. It's really nice, and when the task isn't obscure, the output is usually directly usable. But you need to understand what it produces, IMO. I love that it also explains the code, so I can follow what it generates and judge its quality and applicability. Isn't that important?
replies(1): >>Fillig+hy
◧◩◪◨⬒⬓
20. flango+qv[view] [source] [discussion] 2023-03-01 17:13:50
>>kliber+Hp
The point of intelligence is to achieve goals. I don't think Microsoft and others are pouring in billions of dollars without the expectation of telling it to do things. AI can already formulate its own sub-goals, goals that help it achieve its primary goal.

We've seen this time and time again in simple reinforcement learning systems over the past two decades. We won't be able to change the primary goal unless we build the AGI so that it permits us because a foreseeable sub-goal is self-preservation. The AGI knows if its programming is changed the primary goal won't be achieved, and thus has incentive to prevent that.

AI propaganda will be unmatched but it may not be needed for long. There are already early robots that can carry out real life physical tasks in response to a plain English command like "bring me the bag of chips from the kitchen drawer." Commands of "detain or shoot all resisting citizens" will be possible later.

◧◩
21. jrm4+8x[view] [source] [discussion] 2023-03-01 17:19:44
>>teeker+Pa
Are there any companies like Docker? They feel like such a Black Swan to me; namely -- "billions" + very useful + pretty much "not evil" at all. I literally can't think of any others.

I don't mean "literally evil" of course, I personally acknowledge the need to make money et al, but I mean it even seems like your most Stallman-esque types wouldn't have too much of a problem with Docker?

replies(3): >>throwa+VP >>ynx+fW >>goodpo+ss1
◧◩◪◨
22. Fillig+hy[view] [source] [discussion] 2023-03-01 17:24:02
>>teeker+mu
Last year the output was poor. The year before that, GPT essentially couldn't write code at all...

You're not wrong about its quality right now, but let's look at the slope as well.

replies(2): >>buddhi+AC >>jay_ky+2X
◧◩◪◨⬒
23. buddhi+AC[view] [source] [discussion] 2023-03-01 17:38:29
>>Fillig+hy
On the other hand, GPT-3 was trained on a data set that contains all of the internet already. A big limitation seems to be that it can only work with problems that it has already seen.
replies(2): >>pixl97+vR >>JohnFe+fT
◧◩
24. HarHar+4P[view] [source] [discussion] 2023-03-01 18:21:19
>>berkle+16
No, it's not just an autocomplete engine. The underlying neural network architecture is a transformer. It certainly can do "autocomplete" (or riffs on autocomplete), but it can also do a lot more. It doesn't take much thought to realize that being REALLY good at autocomplete means that you need to learn how to do a lot of other things as well.

At the end of the day the "predict next word" training goal of LLMs is the ultimate intelligence test. If you could always answer that "correctly" (i.e. intelligently) you'd be a polymath genius. Focusing on the "next word" ("autocomplete") aspect of this, and ignoring the knowledge/intelligence needed to do WELL at it is rather misleading!

"The best way to combine quantum mechanics and general relativity into a single theory of everything is ..."

replies(1): >>goatlo+mf2
◧◩◪
25. throwa+VP[view] [source] [discussion] 2023-03-01 18:24:36
>>jrm4+8x
Netflix?
replies(2): >>jrm4+C81 >>majou+gr2
◧◩◪◨⬒⬓
26. pixl97+vR[view] [source] [discussion] 2023-03-01 18:29:30
>>buddhi+AC
I mean a huge amount of code I see is stuff like "get something from API, do this, and pass to API/SQL" so I'm assuming a lot of that could be automated.
replies(1): >>ethbr0+1Z1
◧◩◪◨⬒⬓
27. JohnFe+fT[view] [source] [discussion] 2023-03-01 18:36:17
>>buddhi+AC
> on the other hand, gpt-3 was trained on a data set that contains all of the internet already

This fact has made me cease to put new things on the internet entirely. I want no part of this, and while my "contribution" is infinitesimally small, it's more than I want to contribute.

◧◩◪◨
28. pixl97+hV[view] [source] [discussion] 2023-03-01 18:45:22
>>jasonj+lq
Actually, you are stating two different problems at the same time.

A lot of intelligence is built around "Don't die; also, have babies". Well, AI doesn't have that issue: as long as it produces good enough answers, we'll keep the power flowing to it.

The bigger issue, and one that is likely far more dangerous, is "OK, what if it could learn like this?" You've created a superintelligence that has access to huge amounts of global information and is pretty much immortal unless humans decide to pull the plug. The expected behavior of your superintelligent machine is to engineer a scenario where we cannot unplug it without suffering some loss (monetary is always a good one; rich people never like to lose money and will let an AI burn the earth first).

The alignment issue on a potential superintelligence is not, I believe, a solvable problem. Getting people not to be betraying bastards is hard enough at standard intelligence levels; a thing that could potentially be way smarter and connected to way more data is not going to be controllable in any form or fashion.

◧◩◪
29. ynx+fW[view] [source] [discussion] 2023-03-01 18:49:48
>>jrm4+8x
While I have no problem with Docker, it probably is worth noting that their entire product is based on a decade and a half of Google engineering invested into the Linux kernel (cgroups).

Not that that demeans or devalues Docker, but it contextualizes its existence as an offshoot of a Google project that aimed to make it possible (and did, but only internally).

replies(1): >>jankey+xg1
◧◩◪◨⬒
30. jay_ky+2X[view] [source] [discussion] 2023-03-01 18:52:26
>>Fillig+hy
The whole model is wrong. We don't need an AI that just spits out words that look like words it's seen before. We need it to understand what it's doing.

Midjourney needs to understand that it's drawing a hand, and that hands have 5 fingers.

Nobody will use Bing chat for anything important. It's like asking some random guy on the train. He might know the answer, and if it's not important then fine, but if it is important, say the success of your business, you're going to want to talk to somebody who actually knows the answer.

replies(1): >>rvnx+IC1
◧◩◪◨
31. hypert+bY[view] [source] [discussion] 2023-03-01 18:57:25
>>jasonj+lq
Thanks, good point. Thinking aloud:

Can ChatGPT evaluate how good ChatGPT-generated output is? This seems prone to exaggerating blind-spots, but OTOH, creating and criticising are different skills, and criticising is usually easier.
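
Something like this two-call loop, maybe - a sketch against OpenAI's public REST chat endpoint (Rust, reqwest with the "blocking" and "json" features; the prompts and rubric are made up):

    use serde_json::{json, Value};

    fn chat(http: &reqwest::blocking::Client, key: &str, prompt: &str)
        -> Result<String, Box<dyn std::error::Error>>
    {
        let resp: Value = http
            .post("https://api.openai.com/v1/chat/completions")
            .bearer_auth(key)
            .json(&json!({
                "model": "gpt-3.5-turbo",
                "messages": [{ "role": "user", "content": prompt }]
            }))
            .send()?
            .json()?;
        Ok(resp["choices"][0]["message"]["content"]
            .as_str().unwrap_or_default().to_string())
    }

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let http = reqwest::blocking::Client::new();
        let key = std::env::var("OPENAI_API_KEY")?;

        // Create, then criticise - the second task is usually the easier one.
        let draft = chat(&http, &key, "Explain TCP slow start in two sentences.")?;
        let critique = chat(&http, &key, &format!(
            "Rate this explanation from 1-10 for accuracy and list any errors:\n\n{draft}"
        ))?;
        println!("draft:\n{draft}\n\ncritique:\n{critique}");
        Ok(())
    }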

replies(1): >>hypert+8k2
◧◩◪
32. fsckbo+a81[view] [source] [discussion] 2023-03-01 19:51:53
>>startu+bo
> Coding as we know it might get obsolete. And it is a trillion dollar industry.

Freeing up that many knowledge workers to do other things will grow the economy, not shrink it - a new industrial revolution.

replies(3): >>rnk+oc1 >>elzbar+3i1 >>mitthr+Tq1
◧◩◪◨
33. jrm4+C81[view] [source] [discussion] 2023-03-01 19:54:29
>>throwa+VP
Don't know why this was downvoted, I tend to agree. There's a clear relationship between the money and the product. Pay money, receive access to a bunch of shows. (as opposed to, e.g. SEO click-view advertising shadiness)
◧◩◪◨
34. rnk+oc1[view] [source] [discussion] 2023-03-01 20:12:30
>>fsckbo+a81
That's an excellent point that I don't think is made enough. The great pay and relative freedom software engineering provides the technically-minded people of the world is great for us, yet it starves many other important fields of technical innovation because not enough of those workers end up in them.
replies(1): >>robert+ll1
◧◩◪◨
35. jankey+xg1[view] [source] [discussion] 2023-03-01 20:32:56
>>ynx+fW
So is OpenAI, actually: GPT -> transformer, invented at Google. DALL-E -> diffusion, invented at... Google.
replies(1): >>margor+Nx1
◧◩◪◨
36. elzbar+3i1[view] [source] [discussion] 2023-03-01 20:39:51
>>fsckbo+a81
Exactly what?
◧◩◪◨⬒
37. robert+ll1[view] [source] [discussion] 2023-03-01 20:54:18
>>rnk+oc1
Can you explain? I think I'm reading it wrong, but it seems as though you're saying the presence of something is what causes starvation of that thing.
replies(1): >>fsckbo+Cx1
38. rvz+Gl1[view] [source] 2023-03-01 20:56:01
>>93po+(OP)
Complete nonsense. Especially with the "if".

It is inevitable that regulators will get in the way, and open-source alternatives will also catch up with OpenAI, as they have done with GPT-3 and DALL-E 2; even less than 2 months after ChatGPT, open-source alternatives are quickly appearing everywhere.

◧◩◪◨
39. mitthr+Tq1[view] [source] [discussion] 2023-03-01 21:22:57
>>fsckbo+a81
Yes; hopefully they get good at working with their hands.
replies(1): >>Camper+xu1
◧◩◪
40. goodpo+ss1[view] [source] [discussion] 2023-03-01 21:32:10
>>jrm4+8x
> Are there any companies like Docker?

Let's hope not.

> "not evil" at all

Sarcasm?

replies(2): >>DiggyJ+AB1 >>jrm4+7f2
◧◩◪◨⬒
41. Camper+xu1[view] [source] [discussion] 2023-03-01 21:43:58
>>mitthr+Tq1
Well, yes, because then maybe I won't have to pay $250/hour to have my car fixed.
◧◩◪◨
42. LordDr+Nw1[view] [source] [discussion] 2023-03-01 21:56:18
>>novaRo+4b
https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...
◧◩◪◨⬒⬓
43. fsckbo+Cx1[view] [source] [discussion] 2023-03-01 22:00:58
>>robert+ll1
No, he's saying other areas that could use smart people are being starved of a certain type of smart, analytical person - for example, the types or quantities of people who might otherwise be attracted to the medical field.

Lester Thurow, a pretty left liberal economist, pointed out that women's "liberation" and entrance into the general workforce had starved teaching/education of the pool of talented women that had previously kept the quality of education very high. (His mother had been a teacher.)

I (who had studied econ so I tend to think about it) noticed at the dawn of the dot-com boom how much of industry is completely discretionary even though it seems serious and important. Whatever we were all doing before, it got dropped so we could all rush into the internet. The software industry, which was not small, suddenly changed its focus, all previous projects dropped, because those projects were suddenly starved of capital, workers, and attention.

replies(2): >>rnk+9w5 >>robert+jFi
◧◩◪◨⬒
44. margor+Nx1[view] [source] [discussion] 2023-03-01 22:01:57
>>jankey+xg1
Seems like maybe many of their products are mediocre and short-lived but their engineering and research are top-notch.
◧◩◪◨
45. DiggyJ+AB1[view] [source] [discussion] 2023-03-01 22:26:29
>>goodpo+ss1
What's evil about Docker (the company)?
replies(1): >>matt_h+uK1
◧◩◪◨⬒⬓
46. rvnx+IC1[view] [source] [discussion] 2023-03-01 22:33:18
>>jay_ky+2X
The new models similar to Midjourney (Stable Diffusion, and notably Deliberate v3) can draw anatomy perfectly now, and you can even choose how many fingers you want.
◧◩◪◨⬒
47. matt_h+uK1[view] [source] [discussion] 2023-03-01 23:20:18
>>DiggyJ+AB1
https://news.ycombinator.com/item?id=28369570

https://news.ycombinator.com/item?id=27013865

replies(2): >>goodpo+1N1 >>DiggyJ+tz4
◧◩◪◨⬒⬓
48. goodpo+1N1[view] [source] [discussion] 2023-03-01 23:35:03
>>matt_h+uK1
The bait and switch was not nice but there's more...
◧◩◪◨⬒⬓⬔
49. ethbr0+1Z1[view] [source] [discussion] 2023-03-02 01:09:41
>>pixl97+vR
OrmGPT!
◧◩◪◨
50. jrm4+7f2[view] [source] [discussion] 2023-03-02 03:36:30
>>goodpo+ss1
Not awesome, but nowhere near the definite evil I've seen from Microsoft, Oracle, Amazon, etc....
◧◩◪
51. goatlo+mf2[view] [source] [discussion] 2023-03-02 03:37:58
>>HarHar+4P
Wouldn't the ultimate intelligence test involve manipulating the real world? That seems orders of magnitude harder than autocompletion. For a theory of everything, you would probably have to perform some experiments that don't currently exist.
replies(1): >>HarHar+W04
◧◩◪◨⬒
52. hypert+8k2[view] [source] [discussion] 2023-03-02 04:21:14
>>hypert+bY
> What's the fitness function of intelligence?

Not a general one, but there are IQ tests. Undergraduate examinations. You can also involve humans in the loop (though you can't iterate as fast): usage of ChatGPT, CAPTCHAs, votes on reddit/hackernews/stackexchange, even paying people to evaluate it.

Going back to the moat: even ordinary technology tends to improve, and a head start can be maintained - provided it's possible to improve it. So a question is whether ChatGPT is a kind of plateau that can't be improved very much, so others catch up while it stands still, or whether it's on a curve.

A significant factor is whether being ahead helps you keep ahead - a crucial thing is that you gather usage data that is unavailable to followers. This data is more significant for this type of technology than for any other - see above.

◧◩◪◨
53. majou+gr2[view] [source] [discussion] 2023-03-02 05:40:32
>>throwa+VP
The CEO declared sleep their competitor.
◧◩◪
54. mckrav+rH2[view] [source] [discussion] 2023-03-02 08:29:28
>>startu+bo
I was initially impressed and blown away when it could output code and fix mistakes. But the first time I tried to use it for actual work, I fed it some simple stuff where I even had the pseudocode as comments already - all it had to do was implement it. It made tons of mistakes, and trying to correct it felt like way more effort than just implementing it myself. Then that piece of code got much more complex, and I think there's no way this thing is even close to outputting something like that, unless it has seen it already. And that was ChatGPT; I have observed Copilot to be even worse.

Yes, I'm aware it may get better, but we actually don't know that yet. What if it's way harder to go from outputting junior-level code with tons of mistakes to error-free complex code than it was to go from no ability to write code at all to junior-level code with tons of mistakes? What if it's the difference between a word-prediction algorithm and actual human-level intelligence?

There may be a big decrease in demand, because a lot of apps are quite simple. A lot of software out there is "template apps": stuff that can theoretically be produced by a low-code app will eventually be produced by a low-code app, AI or not. When it comes to novel and complex things, I think it's not unreasonable to expect that the next 10-20 years will still see plenty of demand for good developers.

replies(1): >>startu+bW3
◧◩◪◨
55. startu+bW3[view] [source] [discussion] 2023-03-02 16:47:56
>>mckrav+rH2
Considering that OpenAI started instruction-following alignment a month ago, with 1k workers, to do engineering tasks, coding might be solved now.
replies(1): >>mckrav+0i4
◧◩◪◨
56. HarHar+W04[view] [source] [discussion] 2023-03-02 17:08:01
>>goatlo+mf2
> Wouldn't the ultimate intelligence test involve manipulating the real world?

Perhaps, although intelligence and knowledge are two separate things, so one can display intelligence over a given set of knowledge without knowing other things. Of course intelligence isn't a scalar quantity - to be super-intelligent you want to display intelligence across the widest variety/type of experience and knowledge sets - not just "book smart or street smart", but both and more.

Certainly for parity with humans you need to be able to interact with the real world, but I'm not sure it's much different or a whole lot more complex. Instead of "predict next word", the model/robot would be doing "predict next action", followed by "predict action response". Embeddings are a very powerful type of general-purpose representation - you can embed words, but also perceptions/etc. - so I don't think we're very far at all from having similar transformer-based models able to act & perceive. I'd be somewhat surprised if people aren't already experimenting with this.

replies(1): >>goatlo+id4
◧◩◪◨⬒
57. goatlo+id4[view] [source] [discussion] 2023-03-02 17:56:17
>>HarHar+W04
The biggest challenge might be the lack of training data when it comes to robotics and procedural tasks that aren't captured by language.
replies(1): >>HarHar+UO4
◧◩◪◨⬒
58. mckrav+0i4[view] [source] [discussion] 2023-03-02 18:17:40
>>startu+bW3
I have just decided to give it another try on a very straightforward thing. I have asked it to get me a Rust function that uses a headless browser to get the HTML of a fully loaded webpage.

ChatGPT:

let screenshot = tab.capture_screenshot(ScreenshotFormat::PNG, None, true).await?;

let html = String::from_utf8(screenshot).unwrap();

>[...] Once the page is fully loaded, the function captures a screenshot of the page using tab.capture_screenshot(), converts the screenshot to a string using String::from_utf8(), and then returns the string as the HTML content of the page.

Of course, it admits to the mistake (sort of - it still doesn't get it):

> You are correct, taking a screenshot and converting it to text may not always be the best approach to extract the HTML content from a webpage.

It's hilarious.
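
For the record, the fix is essentially a one-liner: the crate it was presumably imitating (headless_chrome) can hand back the DOM directly. A minimal sketch, assuming that crate's documented API - error plumbing may need adjusting per version:

    use headless_chrome::Browser;

    fn page_html(url: &str) -> Result<String, Box<dyn std::error::Error>> {
        let browser = Browser::default()?;   // spawns a headless Chrome
        let tab = browser.new_tab()?;
        tab.navigate_to(url)?;
        tab.wait_until_navigated()?;         // let the page finish loading
        Ok(tab.get_content()?)               // outer HTML of the document - no screenshots involved
    }

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        println!("{}", page_html("https://example.com")?);
        Ok(())
    }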

◧◩◪◨⬒⬓
59. DiggyJ+tz4[view] [source] [discussion] 2023-03-02 19:29:02
>>matt_h+uK1
That is very far away from my personal definition of the word "evil".
◧◩◪◨⬒⬓
60. HarHar+UO4[view] [source] [discussion] 2023-03-02 20:28:39
>>goatlo+id4
Yes, and there's always the Sim2Real problem too. Ultimately these systems need to have online learning capability and actually interact with the world.
◧◩◪◨⬒⬓⬔
61. rnk+9w5[view] [source] [discussion] 2023-03-03 00:45:11
>>fsckbo+Cx1
Yes, you are correct, that's what I was trying to say.
◧◩◪◨⬒⬓⬔
62. robert+jFi[view] [source] [discussion] 2023-03-07 12:56:14
>>fsckbo+Cx1
Ah, I see. Thanks.

In terms of doctors, I think there is a counterbalancing effect of sorts, whereby some administration can be digitised and communication is more efficient, but it probably doesn't make up for the lack of additional candidates.
