Choose the answer that correctly fills in the blanks.
prediction request, prediction response
tunable request, completion
prompt, fine-tuned LLM
prompt, completion
The correct answer is "prompt, completion": the natural-language text you pass to an LLM is the prompt, and the text the model generates in response is the completion.
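To make the terminology concrete, here is a minimal sketch using the Hugging Face transformers library (an assumption; any text-generation API would do): the string passed in is the prompt, and the text the model returns is the completion.

```python
from transformers import pipeline

# Load a small causal language model for text generation.
generator = pipeline("text-generation", model="gpt2")

prompt = "Large language models are"
result = generator(prompt, max_new_tokens=20)

# The model's generated output is the completion.
print(result[0]["generated_text"])
```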
Text summarization
Information retrieval
Translation
Invoke actions from text
Converting code comments into executable code is a form of translation (from natural language into machine code), so the task that supports this use case is "Translation."
A measure of how well a model can understand and generate human-like language.
A mechanism that allows a model to focus on different parts of the input sequence during computation.
The ability of the transformer to analyze its own performance and make adjustments accordingly.
A technique used to improve the generalization capabilities of a model by training it on diverse datasets.
The correct answer is: "A mechanism that allows a model to focus on different parts of the input sequence during computation."
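This option describes the transformer's self-attention mechanism. As an illustration, here is a minimal NumPy sketch of scaled dot-product self-attention; the toy dimensions and random weights are illustrative assumptions, not the course's code.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention over a sequence X of shape (seq_len, d_model)."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how strongly each token attends to every other token
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V                       # weighted mixture of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
X = rng.normal(size=(seq_len, d_model))
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)  # (4, 8)
```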
Defining the problem and identifying relevant datasets.
Manipulating the model to align with specific project needs.
Performing regularization
Selecting a candidate model and potentially pre-training a custom model.
Deploying the model into the infrastructure and integrating it with the application.
The stages that are part of the generative AI model lifecycle mentioned in the course are:
Defining the problem and identifying relevant datasets.
Selecting a candidate model and potentially pre-training a custom model.
Manipulating the model to align with specific project needs.
Deploying the model into the infrastructure and integrating it with the application.
Performing regularization is not one of the lifecycle stages.
Is this true or false?
False.
Autoencoder
Sequence-to-sequence
Autoregressive
The transformer-based model architecture with the objective of predicting the next token based on the previous sequence of tokens is "Autoregressive." (Guessing a masked token using bidirectional context is the autoencoder objective.)
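As an illustration of the autoregressive objective at inference time, here is a minimal greedy-decoding sketch with Hugging Face transformers and GPT-2 (library and checkpoint are assumptions): each new token is predicted only from the tokens before it.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(5):
        logits = model(ids).logits        # scores for the next token at each position
        next_id = logits[0, -1].argmax()  # greedy: pick the most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tok.decode(ids[0]))
```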
Autoencoder
Sequence-to-sequence
Autoregressive
The transformer-based model architecture well-suited to the task of text translation is "Sequence-to-sequence".
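As a quick illustration, a sequence-to-sequence (encoder-decoder) model such as T5 can perform translation via the Hugging Face pipeline API (library and checkpoint are assumptions; the output shown is illustrative):

```python
from transformers import pipeline

# t5-small is an encoder-decoder (sequence-to-sequence) transformer.
translator = pipeline("translation_en_to_fr", model="t5-small")
print(translator("The weather is nice today."))
# e.g. [{'translation_text': "Le temps est agréable aujourd'hui."}]
```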
True
False
False. Increasing the model size is not always necessary to improve its performance. Other factors such as data quality, training duration, and optimization methods can also significantly impact model performance.
Model size: Number of parameters
Batch size: Number of samples per iteration
Compute budget: Compute constraints
Dataset size: Number of tokens
The alternatives that should be considered for scaling when performing model pre-training are:
Model size: Number of parameters.
Dataset size: Number of tokens.
Compute budget: Compute constraints.
Batch size is not one of the scaling choices discussed.
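To see how these three choices trade off, here is a small sketch using two common rules of thumb (assumptions, not course-provided code): training compute scales roughly as 6 x parameters x tokens, and the Chinchilla heuristic puts the compute-optimal dataset size at about 20 tokens per parameter.

```python
# Rough scaling-law arithmetic relating model size, dataset size, and compute budget.

def training_flops(n_params: float, n_tokens: float) -> float:
    # Common approximation: ~6 FLOPs per parameter per training token.
    return 6 * n_params * n_tokens

def chinchilla_optimal_tokens(n_params: float) -> float:
    # Chinchilla heuristic: ~20 training tokens per model parameter.
    return 20 * n_params

N = 70e9                          # model size: 70 billion parameters
D = chinchilla_optimal_tokens(N)  # dataset size: ~1.4 trillion tokens
print(f"tokens: {D:.2e}, training FLOPs: {training_flops(N, D):.2e}")
# tokens: 1.40e+12, training FLOPs: 5.88e+23
```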
Is this true or false?
True