Magna Concursos

Foram encontradas 45.274 questões.

3979983 Ano: 2025
Disciplina: Inglês (Língua Inglesa)
Banca: FUNDATEC
Orgão: IGP-RS
Provas:

Space power: The dream of beaming solar energy from orbit 

Enunciado 4915828-1

Enunciado 4915828-2

(Available at: www.bbc.com/future/article/20251029-the-beam-dream-should-we-build-solar-farms-in-space–

text specially adapted for this test).

Analyse the following statements, according to the grammatical structures and their meanings in the text:

I. The clause “have made it more feasible” (l. 27-28) expresses an action that began in the past and continues to have effects in the present.
II. In the sentence “It would require enormous satellite structures” (l. 21), the verb form “would require” indicates a hypothetical situation rather than a real one.
III. In the sentence “making it work is no small task” (l. 21), the structure “making it work” functions as the subject of the sentence.
IV.The structure “it was dismissed as too costly” (l. 26) refers to a past passive construction in the simple past.

Which ones are correct?
 

Provas

Questão presente nas seguintes provas
3979982 Ano: 2025
Disciplina: Inglês (Língua Inglesa)
Banca: FUNDATEC
Orgão: IGP-RS
Provas:

Space power: The dream of beaming solar energy from orbit 

Enunciado 4915827-1

Enunciado 4915827-2

(Available at: www.bbc.com/future/article/20251029-the-beam-dream-should-we-build-solar-farms-in-space–

text specially adapted for this test).

Analyse the statements below according to the vocabulary used in the text, and mark T, if true, or F, if false. 

( )The word “feasible” (l. 28) could be replaced by “achievable” without changing the meaning.
( ) The prefix un– in “uncertain” (l. 38) and “unrealistic” (l. 17) indicates reversal of action, similar to the verb “undo”.
( ) The word “viable” (l. 32) refers to something that can function successfully.
(  ) The term “renewable” (l. 14) is formed by the addition of the prefix re- and the suffix -able, which mean, respectively, “not” and “capability/possibility”.

The correct order of filling in the parentheses, from top to bottom, is:
 

Provas

Questão presente nas seguintes provas
3979981 Ano: 2025
Disciplina: Inglês (Língua Inglesa)
Banca: FUNDATEC
Orgão: IGP-RS
Provas:

Space power: The dream of beaming solar energy from orbit 

Enunciado 4915826-1

Enunciado 4915826-2

(Available at: www.bbc.com/future/article/20251029-the-beam-dream-should-we-build-solar-farms-in-space–

text specially adapted for this test).

Mark the alternative that fills in, correctly and respectively, the blanks in the text in lines 13, 16 and 33 according to standard spelling rules.
 

Provas

Questão presente nas seguintes provas
3979980 Ano: 2025
Disciplina: Inglês (Língua Inglesa)
Banca: FUNDATEC
Orgão: IGP-RS
Provas:

Space power: The dream of beaming solar energy from orbit 

Enunciado 4915825-1

Enunciado 4915825-2

(Available at: www.bbc.com/future/article/20251029-the-beam-dream-should-we-build-solar-farms-in-space–

text specially adapted for this test).

Analyse the following statements about some grammatical structures in the text:

I. The verb form “could finally make” (l. 02) expresses a future possibility.
II. The sentence “The light had been collected from the Sun” (l. 07) is in the passive voice.
III. The clause “whether such huge orbital structures would even be legal” (l. 34) expresses a condition.

Which ones are correct?
 

Provas

Questão presente nas seguintes provas
Read the following text and answer the questions.
Artificial Intelligence: The “lethal trifecta”
    LARGE LANGUAGE MODELS (LLMs), a trendy way of building artificial intelligence, have an inherent security problem: they cannot separate code from data. As a result, they are at risk of a type of attack called a prompt injection, in which they are tricked into following commands they should not. Sometimes the result is merely embarrassing, as when a customer-help agent is persuaded to talk like a pirate. On other occasions, it is far more damaging.
    The worst effects of this flaw are reserved for those who create what is known as the “lethal trifecta”. If a company, eager to offer a powerful AI assistant to its employees, gives an LLM access to untrusted data, the ability to read valuable secrets and the ability to communicate with the outside world at the same time, then trouble is sure to follow. And avoiding this is not just a matter for AI engineers. Ordinary users, too, need to learn how to use AI safely, because installing the wrong combination of apps can generate the trifecta accidentally. 
   Better AI engineering is, though, the first line of defence. And that means AI engineers need to start thinking like engineers, who build things like bridges and therefore know that shoddy work costs lives.
  The great works of Victorian England were erected by engineers who could not be sure of the properties of the materials they were using. In particular, whether by incompetence or malfeasance, the iron of the period was often not up to snuff. As a consequence, engineers erred on the side of caution, overbuilding to incorporate redundancy into their creations. The result was a series of centuries-spanning masterpieces.
   AI-security providers do not think like this. Conventional coding is a deterministic practice. Security vulnerabilities are seen as errors to be fixed, and when fixed, they go away. AI engineers, inculcated in this way of thinking from their schooldays, therefore often act as if problems can be solved just with more training data and more astute system prompts.
   These do, indeed, reduce risk. The cleverest frontier models are better at spotting and refusing malicious requests than their older or smaller cousins. But they cannot eliminate risk altogether. Unlike most software, LLMs are probabilistic. Their output is driven by random selection from likely responses. A deterministic approach to safety is thus inadequate. A better way forward is to copy engineers in the physical world and learn to work with, rather than against, capricious systems that can never be guaranteed to function as they should. That means becoming happier dealing with unpredictability by introducing safety margins, risk tolerance and error rates.
   Overbuilding in the AI age might, for instance, mean using a more powerful model than is needed for the task at hand, to reduce the risk it will be tricked into doing something inappropriate. It might mean imposing limits on the number of queries LLMs can take from external sources, calibrated to the risk of damage from a malicious query. And mechanical engineering emphasises failing safely. If an AI system must have access to secrets, then avoid handing it the keys to the kingdom.
   In the physical world, bridges have weight limits – even if they are not always stated clearly to drivers. And, importantly, these are well within the actual tolerances that calculations suggest a bridge will bear. The time has now come for the virtual world of AI systems to be similarly equipped.
Adapted from The Economist, September 27th, 2025, p. 10
The text concludes that the Victorian engineers’ decision
 

Provas

Questão presente nas seguintes provas
Read the following text and answer the questions.
Artificial Intelligence: The “lethal trifecta”
    LARGE LANGUAGE MODELS (LLMs), a trendy way of building artificial intelligence, have an inherent security problem: they cannot separate code from data. As a result, they are at risk of a type of attack called a prompt injection, in which they are tricked into following commands they should not. Sometimes the result is merely embarrassing, as when a customer-help agent is persuaded to talk like a pirate. On other occasions, it is far more damaging.
    The worst effects of this flaw are reserved for those who create what is known as the “lethal trifecta”. If a company, eager to offer a powerful AI assistant to its employees, gives an LLM access to untrusted data, the ability to read valuable secrets and the ability to communicate with the outside world at the same time, then trouble is sure to follow. And avoiding this is not just a matter for AI engineers. Ordinary users, too, need to learn how to use AI safely, because installing the wrong combination of apps can generate the trifecta accidentally. 
   Better AI engineering is, though, the first line of defence. And that means AI engineers need to start thinking like engineers, who build things like bridges and therefore know that shoddy work costs lives.
  The great works of Victorian England were erected by engineers who could not be sure of the properties of the materials they were using. In particular, whether by incompetence or malfeasance, the iron of the period was often not up to snuff. As a consequence, engineers erred on the side of caution, overbuilding to incorporate redundancy into their creations. The result was a series of centuries-spanning masterpieces.
   AI-security providers do not think like this. Conventional coding is a deterministic practice. Security vulnerabilities are seen as errors to be fixed, and when fixed, they go away. AI engineers, inculcated in this way of thinking from their schooldays, therefore often act as if problems can be solved just with more training data and more astute system prompts.
   These do, indeed, reduce risk. The cleverest frontier models are better at spotting and refusing malicious requests than their older or smaller cousins. But they cannot eliminate risk altogether. Unlike most software, LLMs are probabilistic. Their output is driven by random selection from likely responses. A deterministic approach to safety is thus inadequate. A better way forward is to copy engineers in the physical world and learn to work with, rather than against, capricious systems that can never be guaranteed to function as they should. That means becoming happier dealing with unpredictability by introducing safety margins, risk tolerance and error rates.
   Overbuilding in the AI age might, for instance, mean using a more powerful model than is needed for the task at hand, to reduce the risk it will be tricked into doing something inappropriate. It might mean imposing limits on the number of queries LLMs can take from external sources, calibrated to the risk of damage from a malicious query. And mechanical engineering emphasises failing safely. If an AI system must have access to secrets, then avoid handing it the keys to the kingdom.
   In the physical world, bridges have weight limits – even if they are not always stated clearly to drivers. And, importantly, these are well within the actual tolerances that calculations suggest a bridge will bear. The time has now come for the virtual world of AI systems to be similarly equipped.
Adapted from The Economist, September 27th, 2025, p. 10
The metaphor used in avoid handing it the keys to the kingdom (7th paragraph) means avoid giving the system
 

Provas

Questão presente nas seguintes provas
Read the following text and answer the questions.
Artificial Intelligence: The “lethal trifecta”
    LARGE LANGUAGE MODELS (LLMs), a trendy way of building artificial intelligence, have an inherent security problem: they cannot separate code from data. As a result, they are at risk of a type of attack called a prompt injection, in which they are tricked into following commands they should not. Sometimes the result is merely embarrassing, as when a customer-help agent is persuaded to talk like a pirate. On other occasions, it is far more damaging.
    The worst effects of this flaw are reserved for those who create what is known as the “lethal trifecta”. If a company, eager to offer a powerful AI assistant to its employees, gives an LLM access to untrusted data, the ability to read valuable secrets and the ability to communicate with the outside world at the same time, then trouble is sure to follow. And avoiding this is not just a matter for AI engineers. Ordinary users, too, need to learn how to use AI safely, because installing the wrong combination of apps can generate the trifecta accidentally. 
   Better AI engineering is, though, the first line of defence. And that means AI engineers need to start thinking like engineers, who build things like bridges and therefore know that shoddy work costs lives.
  The great works of Victorian England were erected by engineers who could not be sure of the properties of the materials they were using. In particular, whether by incompetence or malfeasance, the iron of the period was often not up to snuff. As a consequence, engineers erred on the side of caution, overbuilding to incorporate redundancy into their creations. The result was a series of centuries-spanning masterpieces.
   AI-security providers do not think like this. Conventional coding is a deterministic practice. Security vulnerabilities are seen as errors to be fixed, and when fixed, they go away. AI engineers, inculcated in this way of thinking from their schooldays, therefore often act as if problems can be solved just with more training data and more astute system prompts.
   These do, indeed, reduce risk. The cleverest frontier models are better at spotting and refusing malicious requests than their older or smaller cousins. But they cannot eliminate risk altogether. Unlike most software, LLMs are probabilistic. Their output is driven by random selection from likely responses. A deterministic approach to safety is thus inadequate. A better way forward is to copy engineers in the physical world and learn to work with, rather than against, capricious systems that can never be guaranteed to function as they should. That means becoming happier dealing with unpredictability by introducing safety margins, risk tolerance and error rates.
   Overbuilding in the AI age might, for instance, mean using a more powerful model than is needed for the task at hand, to reduce the risk it will be tricked into doing something inappropriate. It might mean imposing limits on the number of queries LLMs can take from external sources, calibrated to the risk of damage from a malicious query. And mechanical engineering emphasises failing safely. If an AI system must have access to secrets, then avoid handing it the keys to the kingdom.
   In the physical world, bridges have weight limits – even if they are not always stated clearly to drivers. And, importantly, these are well within the actual tolerances that calculations suggest a bridge will bear. The time has now come for the virtual world of AI systems to be similarly equipped.
Adapted from The Economist, September 27th, 2025, p. 10
Introducing in by introducing safety margins (6th paragraph) is similar in meaning to
 

Provas

Questão presente nas seguintes provas
Read the following text and answer the questions.
Artificial Intelligence: The “lethal trifecta”
    LARGE LANGUAGE MODELS (LLMs), a trendy way of building artificial intelligence, have an inherent security problem: they cannot separate code from data. As a result, they are at risk of a type of attack called a prompt injection, in which they are tricked into following commands they should not. Sometimes the result is merely embarrassing, as when a customer-help agent is persuaded to talk like a pirate. On other occasions, it is far more damaging.
    The worst effects of this flaw are reserved for those who create what is known as the “lethal trifecta”. If a company, eager to offer a powerful AI assistant to its employees, gives an LLM access to untrusted data, the ability to read valuable secrets and the ability to communicate with the outside world at the same time, then trouble is sure to follow. And avoiding this is not just a matter for AI engineers. Ordinary users, too, need to learn how to use AI safely, because installing the wrong combination of apps can generate the trifecta accidentally. 
   Better AI engineering is, though, the first line of defence. And that means AI engineers need to start thinking like engineers, who build things like bridges and therefore know that shoddy work costs lives.
  The great works of Victorian England were erected by engineers who could not be sure of the properties of the materials they were using. In particular, whether by incompetence or malfeasance, the iron of the period was often not up to snuff. As a consequence, engineers erred on the side of caution, overbuilding to incorporate redundancy into their creations. The result was a series of centuries-spanning masterpieces.
   AI-security providers do not think like this. Conventional coding is a deterministic practice. Security vulnerabilities are seen as errors to be fixed, and when fixed, they go away. AI engineers, inculcated in this way of thinking from their schooldays, therefore often act as if problems can be solved just with more training data and more astute system prompts.
   These do, indeed, reduce risk. The cleverest frontier models are better at spotting and refusing malicious requests than their older or smaller cousins. But they cannot eliminate risk altogether. Unlike most software, LLMs are probabilistic. Their output is driven by random selection from likely responses. A deterministic approach to safety is thus inadequate. A better way forward is to copy engineers in the physical world and learn to work with, rather than against, capricious systems that can never be guaranteed to function as they should. That means becoming happier dealing with unpredictability by introducing safety margins, risk tolerance and error rates.
   Overbuilding in the AI age might, for instance, mean using a more powerful model than is needed for the task at hand, to reduce the risk it will be tricked into doing something inappropriate. It might mean imposing limits on the number of queries LLMs can take from external sources, calibrated to the risk of damage from a malicious query. And mechanical engineering emphasises failing safely. If an AI system must have access to secrets, then avoid handing it the keys to the kingdom.
   In the physical world, bridges have weight limits – even if they are not always stated clearly to drivers. And, importantly, these are well within the actual tolerances that calculations suggest a bridge will bear. The time has now come for the virtual world of AI systems to be similarly equipped.
Adapted from The Economist, September 27th, 2025, p. 10
The phrase shoddy work costs lives (3rd paragraph) refers to work that is
 

Provas

Questão presente nas seguintes provas
Read the following text and answer the questions.
Artificial Intelligence: The “lethal trifecta”
    LARGE LANGUAGE MODELS (LLMs), a trendy way of building artificial intelligence, have an inherent security problem: they cannot separate code from data. As a result, they are at risk of a type of attack called a prompt injection, in which they are tricked into following commands they should not. Sometimes the result is merely embarrassing, as when a customer-help agent is persuaded to talk like a pirate. On other occasions, it is far more damaging.
    The worst effects of this flaw are reserved for those who create what is known as the “lethal trifecta”. If a company, eager to offer a powerful AI assistant to its employees, gives an LLM access to untrusted data, the ability to read valuable secrets and the ability to communicate with the outside world at the same time, then trouble is sure to follow. And avoiding this is not just a matter for AI engineers. Ordinary users, too, need to learn how to use AI safely, because installing the wrong combination of apps can generate the trifecta accidentally. 
   Better AI engineering is, though, the first line of defence. And that means AI engineers need to start thinking like engineers, who build things like bridges and therefore know that shoddy work costs lives.
  The great works of Victorian England were erected by engineers who could not be sure of the properties of the materials they were using. In particular, whether by incompetence or malfeasance, the iron of the period was often not up to snuff. As a consequence, engineers erred on the side of caution, overbuilding to incorporate redundancy into their creations. The result was a series of centuries-spanning masterpieces.
   AI-security providers do not think like this. Conventional coding is a deterministic practice. Security vulnerabilities are seen as errors to be fixed, and when fixed, they go away. AI engineers, inculcated in this way of thinking from their schooldays, therefore often act as if problems can be solved just with more training data and more astute system prompts.
   These do, indeed, reduce risk. The cleverest frontier models are better at spotting and refusing malicious requests than their older or smaller cousins. But they cannot eliminate risk altogether. Unlike most software, LLMs are probabilistic. Their output is driven by random selection from likely responses. A deterministic approach to safety is thus inadequate. A better way forward is to copy engineers in the physical world and learn to work with, rather than against, capricious systems that can never be guaranteed to function as they should. That means becoming happier dealing with unpredictability by introducing safety margins, risk tolerance and error rates.
   Overbuilding in the AI age might, for instance, mean using a more powerful model than is needed for the task at hand, to reduce the risk it will be tricked into doing something inappropriate. It might mean imposing limits on the number of queries LLMs can take from external sources, calibrated to the risk of damage from a malicious query. And mechanical engineering emphasises failing safely. If an AI system must have access to secrets, then avoid handing it the keys to the kingdom.
   In the physical world, bridges have weight limits – even if they are not always stated clearly to drivers. And, importantly, these are well within the actual tolerances that calculations suggest a bridge will bear. The time has now come for the virtual world of AI systems to be similarly equipped.
Adapted from The Economist, September 27th, 2025, p. 10
The word tricked (1st paragraph) means that LLMs can be
 

Provas

Questão presente nas seguintes provas
Read the following text and answer the questions.
Artificial Intelligence: The “lethal trifecta”
    LARGE LANGUAGE MODELS (LLMs), a trendy way of building artificial intelligence, have an inherent security problem: they cannot separate code from data. As a result, they are at risk of a type of attack called a prompt injection, in which they are tricked into following commands they should not. Sometimes the result is merely embarrassing, as when a customer-help agent is persuaded to talk like a pirate. On other occasions, it is far more damaging.
    The worst effects of this flaw are reserved for those who create what is known as the “lethal trifecta”. If a company, eager to offer a powerful AI assistant to its employees, gives an LLM access to untrusted data, the ability to read valuable secrets and the ability to communicate with the outside world at the same time, then trouble is sure to follow. And avoiding this is not just a matter for AI engineers. Ordinary users, too, need to learn how to use AI safely, because installing the wrong combination of apps can generate the trifecta accidentally. 
   Better AI engineering is, though, the first line of defence. And that means AI engineers need to start thinking like engineers, who build things like bridges and therefore know that shoddy work costs lives.
  The great works of Victorian England were erected by engineers who could not be sure of the properties of the materials they were using. In particular, whether by incompetence or malfeasance, the iron of the period was often not up to snuff. As a consequence, engineers erred on the side of caution, overbuilding to incorporate redundancy into their creations. The result was a series of centuries-spanning masterpieces.
   AI-security providers do not think like this. Conventional coding is a deterministic practice. Security vulnerabilities are seen as errors to be fixed, and when fixed, they go away. AI engineers, inculcated in this way of thinking from their schooldays, therefore often act as if problems can be solved just with more training data and more astute system prompts.
   These do, indeed, reduce risk. The cleverest frontier models are better at spotting and refusing malicious requests than their older or smaller cousins. But they cannot eliminate risk altogether. Unlike most software, LLMs are probabilistic. Their output is driven by random selection from likely responses. A deterministic approach to safety is thus inadequate. A better way forward is to copy engineers in the physical world and learn to work with, rather than against, capricious systems that can never be guaranteed to function as they should. That means becoming happier dealing with unpredictability by introducing safety margins, risk tolerance and error rates.
   Overbuilding in the AI age might, for instance, mean using a more powerful model than is needed for the task at hand, to reduce the risk it will be tricked into doing something inappropriate. It might mean imposing limits on the number of queries LLMs can take from external sources, calibrated to the risk of damage from a malicious query. And mechanical engineering emphasises failing safely. If an AI system must have access to secrets, then avoid handing it the keys to the kingdom.
   In the physical world, bridges have weight limits – even if they are not always stated clearly to drivers. And, importantly, these are well within the actual tolerances that calculations suggest a bridge will bear. The time has now come for the virtual world of AI systems to be similarly equipped.
Adapted from The Economist, September 27th, 2025, p. 10
By referring to LLMs as a trendy way of building artificial intelligence (1st paragraph), the author implies they are
 

Provas

Questão presente nas seguintes provas