How would you enable Cross-Origin Resource Sharing (CORS) in a Flask application?
- CORS is enabled by default in Flask
- Modify the browser's settings
- Use the "@cross_origin" decorator
- Use the Flask-CORS extension
You can enable CORS in Flask by installing the Flask-CORS extension and initializing it with your app. CORS is not enabled by default, modifying browser settings is not a server-side fix, and the @cross_origin decorator is itself provided by Flask-CORS, so the extension is the recommended answer.
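A minimal sketch of the extension-based approach, assuming Flask-CORS is installed (`pip install flask-cors`):

```python
from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # enables CORS for every route; accepts options such as origins=[...]

@app.route("/api/data")
def data():
    # Flask-CORS adds the Access-Control-Allow-Origin header to this response
    return {"message": "CORS-enabled response"}
```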
How would you enable Cross-Origin Resource Sharing (CORS) in a Flask application?
- Add Access-Control-Allow-Origin header to each route manually.
- CORS is not applicable to Flask applications.
- Set CORS_ENABLED = True in the Flask app configuration.
- Use the @cross_origin decorator from the flask_cors extension.
To enable CORS in a Flask application, you typically use the @cross_origin decorator provided by the flask_cors extension. This allows you to control which origins are allowed to access your API.
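A minimal per-route sketch using the decorator; the `https://example.com` origin is a placeholder:

```python
from flask import Flask
from flask_cors import cross_origin

app = Flask(__name__)

@app.route("/api/data")
@cross_origin(origins=["https://example.com"])  # placeholder: only this origin may call the route
def data():
    return {"message": "accessible from the allowed origin"}
```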
How does a metaclass differ from a class in Python?
- A class can be instantiated multiple times.
- A metaclass can be instantiated multiple times.
- A metaclass defines the structure of a class, while a class defines the structure of an instance.
- A metaclass is an instance of a class.
In Python, a metaclass is the class of a class: classes are its instances, just as objects are instances of a regular class. A metaclass defines how classes themselves are created and behave, while a regular class defines the structure of the instances created from it, which makes metaclasses the tool for customizing class creation.
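A small illustration of the relationship; `RegisteringMeta` and the plugin classes are made-up names:

```python
# A metaclass customizes class creation; this one records every class it builds.
class RegisteringMeta(type):
    registry = []

    def __new__(mcs, name, bases, namespace):
        cls = super().__new__(mcs, name, bases, namespace)
        mcs.registry.append(cls)  # runs when a class is defined, not when instantiated
        return cls

class Plugin(metaclass=RegisteringMeta):
    pass

class AudioPlugin(Plugin):  # inherits the metaclass from Plugin
    pass

print(RegisteringMeta.registry)             # [<class 'Plugin'>, <class 'AudioPlugin'>]
print(isinstance(Plugin, RegisteringMeta))  # True: the class is an instance of the metaclass
```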
How is a generator function different from a normal function in Python?
- A generator function is a built-in Python function
- A generator function is defined using the generator keyword
- A generator function returns multiple values simultaneously
- A generator function yields values lazily one at a time
A generator function differs from a normal function in that it uses the yield keyword: calling it returns a generator object rather than running the body, and each value is produced lazily, one at a time, so the full sequence never has to be held in memory.
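A short sketch of the difference; `countdown` is a made-up example:

```python
def countdown(n):
    """Yield n, n-1, ..., 1 lazily, one value per request."""
    while n > 0:
        yield n  # execution pauses here until the next value is requested
        n -= 1

gen = countdown(3)  # returns a generator object; no body code has run yet
print(next(gen))    # 3
print(list(gen))    # [2, 1] -- the remaining values, produced on demand
```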
How would you analyze the reference count of an object in Python to debug memory issues?
- Reference count analysis is not relevant for debugging memory issues in Python.
- Use the gc module to manually increment and decrement the reference count.
- Utilize the sys.getrefcount() function to inspect the reference count.
- Write custom code to track object references in your application.
You can use the built-in sys.getrefcount() function to inspect an object's reference count. The gc module does not provide manual reference-count manipulation, writing custom tracking code is unnecessary when the interpreter already exposes the count, and the claim that reference count analysis is irrelevant is wrong: it is a standard tool for debugging memory issues in Python.
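A quick illustration; note that the reported count includes the temporary reference created by passing the object to getrefcount() itself, and the exact numbers vary with interpreter state:

```python
import sys

data = [1, 2, 3]
print(sys.getrefcount(data))  # e.g. 2: the name 'data' plus the call's temporary reference

alias = data                  # bind a second name to the same list
print(sys.getrefcount(data))  # one higher than before
```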
How can you find the mean of all elements in a NumPy array?
- array.mean()
- array.sum() / len(array)
- np.average(array)
- np.mean(array)
To find the mean of all elements in a NumPy array, call the array's own mean() method, as in array.mean(). np.mean(array) computes the same result, but the method form is the preferred idiom. array.sum() / len(array) is unreliable because len() returns only the length of the first axis, so it gives the wrong answer for multi-dimensional arrays.
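A quick comparison of the forms on a small sample array:

```python
import numpy as np

array = np.array([1.0, 2.0, 3.0, 4.0])
print(array.mean())       # 2.5 -- method form
print(np.mean(array))     # 2.5 -- equivalent function form
print(np.average(array))  # 2.5 -- also works; np.average additionally supports weights=
```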
How can you identify the parts of your Python code that are consuming the most time?
- Ask your colleagues for opinions.
- Consult a fortune teller.
- Rely solely on your intuition and experience.
- Use the time module to measure execution time for each section of code.
You can use the time module to measure execution time for different parts of your code. This helps pinpoint areas that need optimization. Relying on intuition or asking others may not provide accurate insights.
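A minimal timing sketch with the time module; `slow_section` is a stand-in for whatever part of your program you want to measure:

```python
import time

def slow_section():
    return sum(range(10_000_000))  # placeholder for a suspect piece of code

start = time.perf_counter()  # perf_counter is a monotonic clock suited to timing
slow_section()
elapsed = time.perf_counter() - start
print(f"slow_section took {elapsed:.3f}s")
```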
How can you implement a custom layer in a neural network using TensorFlow or PyTorch?
- Define a class that inherits from tf.keras.layers.Layer or torch.nn.Module
- Modify the source code of the framework
- Use only pre-defined layers
- Write a separate Python function
Defining a class that inherits from tf.keras.layers.Layer (TensorFlow) or torch.nn.Module (PyTorch) is the correct approach: it lets you specify the layer's behavior and its learnable parameters. Using only pre-defined layers gives you no custom behavior, a standalone Python function is not the standard way to carry learnable state through training, and modifying the framework's source code is bad practice.
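A sketch of the subclassing approach in PyTorch; `ScaledLinear` is a hypothetical layer with one extra learnable parameter:

```python
import torch
import torch.nn as nn

class ScaledLinear(nn.Module):
    """A linear layer whose output is multiplied by a learnable scale."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.scale = nn.Parameter(torch.ones(1))  # registered as a learnable parameter

    def forward(self, x):
        return self.scale * self.linear(x)

layer = ScaledLinear(4, 2)
out = layer(torch.randn(3, 4))  # shape (3, 2); scale is trained like any other weight
```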
How can you implement a custom loss function in a machine learning model using TensorFlow or PyTorch?
- By extending the base loss class and defining a custom loss function using mathematical operations.
- By modifying the framework's source code to include the custom loss function.
- By stacking multiple pre-built loss functions together.
- By using only the built-in loss functions provided by the framework.
To implement a custom loss function, you extend the base loss class in TensorFlow or PyTorch and define your loss using mathematical operations. This allows you to tailor the loss function to your specific problem. Modifying the framework's source code is not recommended as it can lead to maintenance issues. Stacking pre-built loss functions is possible but does not create a truly custom loss.
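A sketch in PyTorch, where a custom loss is commonly written as an nn.Module subclass built from ordinary tensor operations; `HuberLikeLoss` is a made-up example:

```python
import torch
import torch.nn as nn

class HuberLikeLoss(nn.Module):
    """Quadratic for small errors, linear for large ones."""

    def __init__(self, delta=1.0):
        super().__init__()
        self.delta = delta

    def forward(self, pred, target):
        err = torch.abs(pred - target)
        quadratic = 0.5 * err ** 2
        linear = self.delta * (err - 0.5 * self.delta)
        return torch.where(err <= self.delta, quadratic, linear).mean()

loss_fn = HuberLikeLoss(delta=1.0)
loss = loss_fn(torch.randn(8), torch.randn(8))  # scalar tensor, differentiable
```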
How can you implement a stack such that you can retrieve the minimum element in constant time?
- It's not possible
- Using a linked list
- Using a priority queue
- Using an additional stack
You can implement a stack with constant-time minimum retrieval by using an additional stack that tracks minimum values. Whenever you push an element onto the main stack, you compare it with the top of the auxiliary stack and push the smaller of the two onto the auxiliary stack; its top is then always the current minimum, and both stacks pop together, ensuring constant-time retrieval of the minimum element.
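A minimal sketch of the two-stack technique; `MinStack` is a made-up name:

```python
class MinStack:
    """Stack with O(1) push, pop, and get_min via an auxiliary stack."""

    def __init__(self):
        self._stack = []
        self._mins = []  # _mins[-1] is always the current minimum

    def push(self, value):
        self._stack.append(value)
        # push the smaller of the new value and the current minimum
        self._mins.append(value if not self._mins else min(value, self._mins[-1]))

    def pop(self):
        self._mins.pop()  # both stacks shrink together
        return self._stack.pop()

    def get_min(self):
        return self._mins[-1]

s = MinStack()
for v in (5, 2, 7):
    s.push(v)
print(s.get_min())  # 2
s.pop()             # removes 7
print(s.get_min())  # still 2
```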