Magna Concursos

80 questions were found.

2613435 Year: 2022
Subject: English
Examining board: IME
Institution: IME

Text 1

XAI-Explainable artificial intelligence

Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S. and Yang, G.-Z.

Recent successes in machine learning (ML) have led to a new wave of artificial intelligence (AI) applications that offer extensive benefits to a diverse range of fields. However, many of these systems are not able to explain their decisions and actions to human users. Explanations may not be essential for certain AI applications, and some AI researchers argue that the emphasis on explanation is misplaced, too difficult to achieve, and perhaps unnecessary. However, for many critical applications in defense, medicine, finance, and law, explanations are essential for users to understand, trust, and effectively manage these new, artificially intelligent partners.

Recent AI successes are largely attributed to new ML techniques that construct models in their internal representations. These include support vector machines (SVMs), random forests, probabilistic graphical models, reinforcement learning (RL), and deep learning (DL) neural networks. Although these models exhibit high performance, they are opaque in terms of explainability. There may be inherent conflict between ML performance (e.g., predictive accuracy) and explainability. Often, the highest performing methods (e.g., DL) are the least explainable, and the most explainable (e.g., decision trees) are the least accurate.

The purpose of an explainable AI (XAI) system is to make its behavior more intelligible to humans by providing explanations. There are some general principles to help create effective, more human-understandable AI systems: The XAI system should be able to explain its capabilities and understandings; explain what it has done, what it is doing now, and what will happen next; and disclose the salient information that it is acting on.

However, every explanation is set within a context that depends on the task, abilities, and expectations of the user of the AI system. The definitions of interpretability and explainability are, thus, domain dependent and may not be defined independently of a domain. Explanations can be full or partial.

Models that are fully interpretable give full and completely transparent explanations. Models that are partially interpretable reveal important pieces of their reasoning process. Interpretable models obey “interpretability constraints” that are defined according to the domain, whereas black-box or unconstrained models do not necessarily obey these constraints. Partial explanations may include variable importance measures, local models that approximate global models at specific points, and saliency maps.

XAI assumes that an explanation is provided to an “end user” who depends on the decisions, recommendations, or actions produced by an AI system, yet there could be many different kinds of users, often at different time points in the development and use of the system. For example, one type of user might be an intelligence analyst, a judge, or an operator. However, other users who demand an explanation of the system might be a developer or test operator who needs to understand where there might be areas for improvement. Yet other users might be policy-makers, who are trying to assess the fairness of the system. Each user group may have a preferred explanation type that is able to communicate information in the most effective way. An effective explanation will take the target user group of the system into account, since users may vary in their background knowledge and in their needs for what should be explained.

A number of ways of evaluating and measuring the effectiveness of an explanation have been proposed; however, there is currently no common means of measuring whether an XAI system is more intelligible to a user than a non-XAI system. Some of these measures are subjective measures from the user’s point of view, such as user satisfaction, which can be measured through a subjective rating of the clarity and utility of an explanation. More objective measures of an explanation’s effectiveness might be task performance, i.e., does the explanation improve the user’s decision-making? Reliable and consistent measurement of the effects of explanations is still an open research question. Evaluation and measurement for XAI systems include evaluation frameworks, common ground, common sense, and argumentation.

(. . . )

From a human-centered research perspective, research on competencies and knowledge could take XAI beyond the role of explaining a particular XAI system and helping its users to determine appropriate trust. In the future, XAIs may eventually have substantial social roles. These roles could include not only learning and explaining to individuals but also coordinating with other agents to connect knowledge, developing cross-disciplinary insights and common ground, partnering in teaching people and other agents, and drawing on previously discovered knowledge to accelerate the further discovery and application of knowledge. From such a social perspective of knowledge understanding and generation, the future of XAI is just beginning.

Adapted from: Science Robotics, <https://www.science.org/doi/10.1126/scirobotics.aay7120> [Accessed on 15th April 2022].

 

2613434 Year: 2022
Subject: English
Examining board: IME
Institution: IME

Text 1

XAI-Explainable artificial intelligence

2613433 Year: 2022
Subject: Mathematics
Examining board: IME
Institution: IME

Let f(x) be a function defined on !$ \mathbb{R} !$ such that f(1) = 1. For every !$ x \in \mathbb{R} !$ the following inequalities hold:

f(x + 7) ⩾ f(x) + 7 and f(x + 1) ≤ f(x) + 1 .

If g(x) = f(x − 1) − x + 2, the value of g(2023) is
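A sketch of the squeeze argument behind this question (a solution outline, not part of the original exam statement):

```latex
% Chaining f(x+1) \le f(x) + 1 seven times:
f(x+7) \le f(x+6) + 1 \le \dots \le f(x) + 7 .
% Combined with f(x+7) \ge f(x) + 7, every intermediate step is an equality, so
f(x+1) = f(x) + 1 \quad \text{for all } x \in \mathbb{R} .
% Iterating from f(1) = 1 gives f(2022) = 2022, hence
g(2023) = f(2022) - 2023 + 2 = 2022 - 2021 = 2 .
```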

 

2613432 Year: 2022
Subject: Mathematics
Examining board: IME
Institution: IME

A natural number is a palindrome when it reads the same from left to right and from right to left. Let n be a palindromic natural number such that 1000 ≤ n ≤ 9999. If n is a perfect cube, then the sum of the digits of n is
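A quick brute-force verification sketch (not part of the exam): enumerate the four-digit perfect cubes and keep the palindromes.

```python
# Four-digit cubes run from 10^3 = 1000 to 21^3 = 9261.
cubes = [k**3 for k in range(10, 22) if 1000 <= k**3 <= 9999]
palindromes = [n for n in cubes if str(n) == str(n)[::-1]]
digit_sum = sum(int(d) for d in str(palindromes[0]))
print(palindromes, digit_sum)  # [1331] 8  (11^3 = 1331)
```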

 

2613431 Year: 2022
Subject: Mathematics
Examining board: IME
Institution: IME

If the equation !$ 2x^2 + cxy − 3x + 6y^2 − 4y − 2 = 0 !$ represents two intersecting straight lines in the real plane, then the positive value of the real number c is
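A conic degenerates into a line pair exactly when its full 3×3 matrix is singular; the numeric sketch below (helper name `conic_det` is my own) checks the resulting value of c.

```python
# Conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 with matrix
#   [[A, B/2, D/2], [B/2, C, E/2], [D/2, E/2, F]].
# Here A=2, B=c, C=6, D=-3, E=-4, F=-2.
def conic_det(c):
    m = [[2, c/2, -3/2], [c/2, 6, -2], [-3/2, -2, -2]]
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

# Expanding gives det(c) = c^2/2 + 3c - 91/2, i.e. c^2 + 6c - 91 = 0,
# with roots c = 7 and c = -13; the positive one is c = 7.
assert abs(conic_det(7)) < 1e-9
# The pair is intersecting, not parallel: B^2 - 4AC = 49 - 48 > 0.
# Indeed 2x^2 + 7xy - 3x + 6y^2 - 4y - 2 = (x + 2y - 2)(2x + 3y + 1).
```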

 

2613430 Year: 2022
Subject: Mathematics
Examining board: IME
Institution: IME

Consider the equation

!$ \dfrac{144^x+324^x}{64^x+729^x}=\dfrac{6}{7}. !$

The sum of the absolute values of the real solutions of this equation is
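Writing !$ a = 2^x, b = 3^x !$ turns the left side into !$ t/(t^2 − t + 1) !$ with !$ t = (2/3)^{2x} !$, so the equation reduces to !$ 6t^2 − 13t + 6 = 0 !$, giving t = 3/2 or 2/3 and hence x = ±1/2. A numeric verification sketch:

```python
def lhs(x):
    # Left-hand side of the equation; 144 = 2^4 3^2, 324 = 2^2 3^4,
    # 64 = 2^6, 729 = 3^6, which is what makes the substitution work.
    return (144**x + 324**x) / (64**x + 729**x)

for x in (0.5, -0.5):
    assert abs(lhs(x) - 6/7) < 1e-12  # both candidates satisfy the equation
print(abs(0.5) + abs(-0.5))  # sum of absolute values: 1.0
```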

 

2613429 Year: 2022
Subject: Mathematics
Examining board: IME
Institution: IME

A triangle ABC has incenter I and excenter G relative to side !$ \overline{BC} !$. If !$ B\widehat{I}C !$ + !$ A\widehat{G}C !$ = 155°, then the angle !$ A\widehat{C}B !$ is
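A numeric sanity check of the known identities !$ B\widehat{I}C = 90° + \widehat{A}/2 !$ and !$ A\widehat{G}C = 90° − \widehat{A}/2 − \widehat{C}/2 !$, whose sum is !$ 180° − \widehat{C}/2 !$, forcing !$ \widehat{C} = 50° !$. The concrete triangle below (A = 60°, B = 70°) is my own choice; any triangle with C = 50° should work.

```python
from math import sin, cos, radians, acos, degrees, hypot

def angle_at(p, q, r):
    """Angle, in degrees, at vertex q of triangle p-q-r."""
    v1 = (p[0]-q[0], p[1]-q[1]); v2 = (r[0]-q[0], r[1]-q[1])
    dot = v1[0]*v2[0] + v1[1]*v2[1]
    return degrees(acos(dot / (hypot(*v1) * hypot(*v2))))

A_deg, B_deg, C_deg = 60.0, 70.0, 50.0
a, b, c = sin(radians(A_deg)), sin(radians(B_deg)), sin(radians(C_deg))  # law of sines
B = (0.0, 0.0); C = (a, 0.0)
A = (c*cos(radians(B_deg)), c*sin(radians(B_deg)))

# Incenter has barycentric coordinates (a : b : c); the excenter tangent
# to BC (opposite A) has (-a : b : c).
s = a + b + c
I = ((a*A[0] + b*B[0] + c*C[0]) / s, (a*A[1] + b*B[1] + c*C[1]) / s)
sG = -a + b + c
G = ((-a*A[0] + b*B[0] + c*C[0]) / sG, (-a*A[1] + b*B[1] + c*C[1]) / sG)

assert abs(angle_at(B, I, C) + angle_at(A, G, C) - 155.0) < 1e-6
```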

 

2613428 Year: 2022
Subject: Mathematics
Examining board: IME
Institution: IME

An absent-minded student took a clock apart. When reassembling it, he swapped the positions of the hour and minute hands, so that the hour hand came to rotate at the speed of the minute hand, and vice versa. Knowing that the clock was set correctly at 4 o'clock, the interval containing the time t at which it will show the correct time again for the first time is
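With the hands swapped, the hour pointer turns at 360°/h and the minute pointer at 30°/h; starting from 4:00, both agreement conditions reduce to the same congruence 330t ≡ 0 (mod 360). A sketch with exact arithmetic (the modelling of the statement is my own):

```python
from fractions import Fraction

def shows_correct_time(t):
    """t in hours after 4:00; hand positions in degrees, mod 360."""
    hour_ptr = (120 + 360*t) % 360   # swapped: moves at minute-hand speed
    min_ptr = (30*t) % 360           # swapped: moves at hour-hand speed
    true_hour = (120 + 30*t) % 360
    true_min = (360*t) % 360
    return hour_ptr == true_hour and min_ptr == true_min

# Both conditions give 330 t ≡ 0 (mod 360), i.e. t = 12k/11 hours.
t = Fraction(12, 11)
assert shows_correct_time(t)
# No earlier solution on a fine grid of candidate times:
assert not any(shows_correct_time(t * k / 100) for k in range(1, 100))
# First correct reading about 1 h 5 min 27 s after 4:00, i.e. near 5:05:27.
```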

 

2613427 Year: 2022
Subject: Mathematics
Examining board: IME
Institution: IME

Consider a point P whose coordinates (x, y), with !$ x, y \in \mathbb{R} !$, satisfy the system

!$ \begin{cases} 4\csc(α)\,x − 6\cot(α)\,y = 4\sin(α)\\12\csc(α)\,y − 8\cot(α)\,x = 0\end{cases} !$

where !$ α !$ is an angle in radians with !$ α ≠ kπ !$ (!$ k \in \mathbb{Z} !$). The locus described by the points P as the angle !$ α !$ varies is a segment of a:
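Eliminating α by hand gives x = 1 and y = (2/3)cos α, a vertical line segment with −2/3 < y < 2/3. A numeric verification sketch that solves the 2×2 linear system for several values of α (helper name `solve_point` is my own):

```python
from math import sin, cos, pi

def solve_point(alpha):
    """Solve the 2x2 linear system for (x, y) at a given alpha (alpha != k*pi)."""
    csc, cot = 1/sin(alpha), cos(alpha)/sin(alpha)
    # 4*csc*x - 6*cot*y = 4*sin(alpha)
    # -8*cot*x + 12*csc*y = 0
    a11, a12, b1 = 4*csc, -6*cot, 4*sin(alpha)
    a21, a22, b2 = -8*cot, 12*csc, 0.0
    det = a11*a22 - a12*a21   # = 48*(csc^2 - cot^2) = 48, never zero
    return ((b1*a22 - a12*b2) / det, (a11*b2 - a21*b1) / det)

# Every solution has x = 1 and y = (2/3)cos(alpha): a vertical line segment.
for k in range(1, 20):
    alpha = k * pi / 20
    x, y = solve_point(alpha)
    assert abs(x - 1) < 1e-9 and abs(y - (2/3)*cos(alpha)) < 1e-9
```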

 

2613426 Year: 2022
Subject: Mathematics
Examining board: IME
Institution: IME

A regular polygon has 2n vertices (!$ n \in \mathbb{N} !$, n > 1). Four vertices of the polygon are chosen at random, forming the quadrilateral ABCD. The probability that ABCD is a rectangle is
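A rectangle inscribed in a regular 2n-gon must consist of two pairs of antipodal vertices (two diameters of the circumcircle), giving !$ \binom{n}{2} !$ rectangles out of !$ \binom{2n}{4} !$ quadrilaterals, i.e. probability !$ \frac{3}{(2n−1)(2n−3)} !$. An exhaustive check for small n (a verification sketch):

```python
from itertools import combinations
from math import comb

def rectangle_count(n):
    """Count 4-vertex subsets of a regular 2n-gon that form a rectangle."""
    hits = 0
    for quad in combinations(range(2 * n), 4):
        s = set(quad)
        # A subset is a rectangle iff it is a union of two antipodal pairs.
        if all((v + n) % (2 * n) in s for v in quad):
            hits += 1
    return hits

for n in range(2, 7):
    hits, total = rectangle_count(n), comb(2 * n, 4)
    assert hits == comb(n, 2)                        # choose 2 of n diameters
    assert hits * (2*n - 1) * (2*n - 3) == 3 * total # P = 3/((2n-1)(2n-3))
```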

 
