Magna Concursos

160 questions were found.

2993183 Year: 2023
Subject: IT – Databases
Examining board: FGV
Agency: TCE-SP

Note

Where referenced, consider the relational tables TX and TY, created and populated with the following SQL script.

create table TY(C int primary key not null, A int);

create table TX(
    A int primary key not null,
    B int,
    foreign key (B) references TY(C) on delete cascade
);

insert into TY values (1,0);
insert into TY(C) values (2);
insert into TY(C) values (3);
insert into TY values (5,NULL);
insert into TY values (6,NULL);

insert into TX values (1,2);
insert into TX values (2,1);
insert into TX values (3,2);
insert into TX values (4,2);

With reference to tables TX and TY, as described above, analyze the following SQL statement.

insert into TX(A, B)
select C, A from TY
where C not in (select A from TX)
   or A in (select A from TX);

The set of rows inserted is:
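As a sanity check (not part of the original question), the script and the statement can be replayed in an in-memory SQLite database; the exam names no DBMS, so this sketch assumes standard SQL semantics for IN/NOT IN with NULLs:

```python
import sqlite3

# Replay the shared script, capture TX before and after the
# INSERT ... SELECT, and diff the two sets of rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table TY(C int primary key not null, A int);
    create table TX(A int primary key not null, B int,
        foreign key (B) references TY(C) on delete cascade);
    insert into TY values (1,0);
    insert into TY(C) values (2);
    insert into TY(C) values (3);
    insert into TY values (5,NULL);
    insert into TY values (6,NULL);
    insert into TX values (1,2);
    insert into TX values (2,1);
    insert into TX values (3,2);
    insert into TX values (4,2);
""")
before = set(conn.execute("select A, B from TX"))
# TX.A before the insert is {1,2,3,4}: "C not in ..." selects C=5 and
# C=6; "A in ..." matches nothing, since TY.A holds only 0 and NULLs.
conn.execute("""
    insert into TX(A, B)
    select C, A from TY
    where C not in (select A from TX)
       or A in (select A from TX)
""")
after = set(conn.execute("select A, B from TX"))
inserted = sorted(after - before)
print(inserted)  # → [(5, None), (6, None)]
```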

 

2993182 Year: 2023
Subject: IT – Databases
Examining board: FGV
Agency: TCE-SP


With reference to tables TX and TY, as described above, analyze the following SQL statement.

select count(*)
from TX t1 left join TY t2 on t1.B = t2.A;

The value displayed by executing this statement is:
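This too can be checked with a sketch in in-memory SQLite (an assumption; the exam names no DBMS), replaying the shared script in its original state and running the LEFT JOIN count:

```python
import sqlite3

# Rebuild the original tables and count the rows produced by the
# LEFT JOIN of TX against TY on t1.B = t2.A.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table TY(C int primary key not null, A int);
    create table TX(A int primary key not null, B int,
        foreign key (B) references TY(C) on delete cascade);
    insert into TY values (1,0);
    insert into TY(C) values (2);
    insert into TY(C) values (3);
    insert into TY values (5,NULL);
    insert into TY values (6,NULL);
    insert into TX values (1,2);
    insert into TX values (2,1);
    insert into TX values (3,2);
    insert into TX values (4,2);
""")
# TY.A holds only 0 and NULLs, so no TX.B value (2, 1, 2, 2) finds a
# match; a LEFT JOIN still keeps each TX row exactly once.
(count,) = conn.execute(
    "select count(*) from TX t1 left join TY t2 on t1.B = t2.A"
).fetchone()
print(count)  # → 4
```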

 

2993181 Year: 2023
Subject: Official Writing
Examining board: FGV
Agency: TCE-SP

A writing manual presents the following example from a student's essay:

“Todo o mundo vem ao colégio com roupas bastante informais porque, é claro, acham que o colégio é como se fosse a nossa própria casa, isto é, um lugar que as pessoas devem se sentir à vontade.”

This fragment of the essay contains a number of problems; the observation that introduces an error, instead of making an adequate correction, is:

 

2993180 Year: 2023
Subject: Digital Law
Examining board: FGV
Agency: TCE-SP

The SisBRAVO system was developed in compliance with Law No. 13,709/2018 – the Brazilian General Data Protection Law (LGPD). SisBRAVO requests authorization to collect the personal data entered by its users.

Accordingly, SisBRAVO meets requirements classified as:

 

2993179 Year: 2023
Subject: Digital Law
Examining board: FGV
Agency: TCE-SP

TCE-SP wants to improve its personnel management with new software. To that end, the General Personnel Office delegated to the People Management Directorate (DGP) the task of determining the means by which this software would be implemented. The DGP decided to hire the company SisPesSoft to develop the software in partnership with the in-house team of the Information Technology Directorate.

According to Law No. 13,709/2018 – the Brazilian General Data Protection Law (LGPD), in this context the company SisPesSoft acts as:

 

2993178 Year: 2023
Subject: Statistics
Examining board: FGV
Agency: TCE-SP

Consider a cubic die with faces numbered 1 to 6 such that, when rolled, every face has the same probability of coming up. When this die is rolled 3 times in a row, the probability that the sum of the numbers obtained equals 7 is N/216.

The value of N is:
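The denominator 216 = 6³ suggests counting ordered triples directly; N can be checked by brute-force enumeration of the 216 equally likely outcomes (a sketch, not part of the original question):

```python
from itertools import product

# Count the ordered rolls (a, b, c) in {1..6}^3 whose sum is 7.
# Each ordered triple has probability 1/216, so this count is N.
N = sum(1 for roll in product(range(1, 7), repeat=3) if sum(roll) == 7)
print(N)  # → 15
```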

 

2993177 Year: 2023
Subject: Statistics
Examining board: FGV
Agency: TCE-SP

A piece is placed on square 1 of a 10-square board. It moves according to the following probability rule: the piece advances one square if an even number comes up on the roll of a die, and advances two squares if the number is odd. Let C(j) be the probability that the piece lands on square j.

Then it is correct to state that:
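Since the piece can only reach square j from square j−1 (even roll) or square j−2 (odd roll), C(j) satisfies the recurrence C(j) = ½·C(j−1) + ½·C(j−2) for j ≥ 3, with C(1) = 1 and C(2) = ½. A sketch with exact fractions (assuming a roll that would move the piece past square 10 simply ends the walk):

```python
from fractions import Fraction

half = Fraction(1, 2)
# Square 1 is the starting square; square 2 is reached only by an
# even roll from square 1.
C = {1: Fraction(1), 2: half}
for j in range(3, 11):  # squares 3..10
    C[j] = half * C[j - 1] + half * C[j - 2]
print(C[10])  # → 341/512
```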

 

2993176 Year: 2023
Subject: English
Examining board: FGV
Agency: TCE-SP

READ THE TEXT AND ANSWER THE FOLLOWING QUESTION:

Chatbots could be used to steal data, says cybersecurity agency

The UK’s cybersecurity agency has warned that there is an increasing risk that chatbots could be manipulated by hackers.

The National Cyber Security Centre (NCSC) has said that individuals could manipulate the prompts of chatbots, which run on artificial intelligence by creating a language model and give answers to questions by users, through “prompt injection” attacks that would make them behave in an unintended manner.

The point of a chatbot is to mimic human-like conversations, which it has been trained to do through scraping large amounts of data. Commonly used in online banking or online shopping, chatbots are generally designed to handle simple requests.

Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard, are trained using data that generates human-like responses to user prompts. Since chatbots are used to pass data to third-party applications and services, the NCSC has said that risks from malicious “prompt injection” will grow.

For instance, if a user inputs a statement or question that a language model is not familiar with, or if they find a combination of words to override the model’s original script or prompts, the user can cause the model to perform unintended actions.

Such inputs could cause a chatbot to generate offensive content or reveal confidential information in a system that accepts unchecked input.

According to the NCSC, prompt injection attacks can also cause real world consequences, if systems are not designed with security. The vulnerability of chatbots and the ease with which prompts can be manipulated could cause attacks, scams and data theft. The large language models are increasingly used to pass data to third-party applications and services, meaning the risks from malicious prompt injection will grow.

The NCSC said: “Prompt injection and data poisoning attacks can be extremely difficult to detect and mitigate. However, no model exists in isolation, so what we can do is design the whole system with security in mind.”

The NCSC said that cyber-attacks caused by artificial intelligence and machine learning that leaves systems vulnerable can be mitigated through designing for security and understanding the attack techniques that exploit “inherent vulnerabilities” in machine learning algorithm.

Adapted from: The Guardian, Wednesday 30 August 2023, page 4.

“If” in “if they find a combination of words” (5th paragraph) signals a:

 

2993175 Year: 2023
Subject: English
Examining board: FGV
Agency: TCE-SP

READ THE TEXT ABOVE (“Chatbots could be used to steal data, says cybersecurity agency”) AND ANSWER THE FOLLOWING QUESTION:

In “Large language models, such as OpenAI’s ChatGPT and Google’s AI chatbot Bard” (4th paragraph), “such as” introduces a(n):

 

2993174 Year: 2023
Subject: English
Examining board: FGV
Agency: TCE-SP

READ THE TEXT ABOVE (“Chatbots could be used to steal data, says cybersecurity agency”) AND ANSWER THE FOLLOWING QUESTION:

According to the text, attacks, scams and data theft are actions that should be:

 
