numpy  2.0.0
src/multiarray/common.h File Reference
#include "ucsnarrow.h"


Defines

#define error_converting(x)   (((x) == -1) && PyErr_Occurred())

Functions

NPY_NO_EXPORT int PyArray_DTypeFromObject (PyObject *obj, int maxdims, PyArray_Descr **out_dtype)
NPY_NO_EXPORT int PyArray_DTypeFromObjectHelper (PyObject *obj, int maxdims, PyArray_Descr **out_dtype, int string_status)
NPY_NO_EXPORT PyArray_Descr * _array_find_python_scalar_type (PyObject *op)
NPY_NO_EXPORT PyArray_Descr * _array_typedescr_fromstr (char *str)
NPY_NO_EXPORT int check_and_adjust_index (npy_intp *index, npy_intp max_item, int axis)
NPY_NO_EXPORT char * index2ptr (PyArrayObject *mp, npy_intp i)
NPY_NO_EXPORT int _zerofill (PyArrayObject *ret)
NPY_NO_EXPORT int _IsAligned (PyArrayObject *ap)
NPY_NO_EXPORT npy_bool _IsWriteable (PyArrayObject *ap)

Define Documentation

#define error_converting(x)   (((x) == -1) && PyErr_Occurred())

Referenced by PyArray_FromArray().


Function Documentation

Returns NULL without setting an exception if no scalar type is matched, and a new dtype reference otherwise.
Note that bools are a subclass of int. For integers, checks whether the value can fit into a longlong or ulonglong and returns that type -- otherwise returns object.

References NPY_DOUBLE, and PyArray_DescrFromType().

Referenced by _array_from_buffer_3118().

Returns a new reference.
The special casing for STRING and VOID types was removed in accordance with http://projects.scipy.org/numpy/ticket/1227. It used to be that IsAligned always returned True for these types, which is indeed the case when they are created using PyArray_DescrConverter(), but not necessarily when using PyArray_DescrAlignConverter().

Referenced by array_trace(), and PyArray_UpdateFlags().

If we own our own data, then there is no problem.
Otherwise, get to the final base object. If it is a writeable array, then return TRUE. Also return TRUE if we can find an array object, a writeable buffer object, or a string object (for pickling support and memory savings) as the final base object.

  • This last case could be removed if a proper pickleable buffer was added to Python.
    MW: I think it would be better to disallow switching from READONLY
    to WRITEABLE like this...

The string case is here so pickle support works seamlessly and an unpickled array can be set and reset writeable -- this could be abused.

Referenced by PyArray_UpdateFlags().

References NPY_TRUE.

NPY_NO_EXPORT int check_and_adjust_index ( npy_intp *  index,
npy_intp  max_item,
int  axis 
)
Returns -1 and sets an exception if *index is an invalid index for an array of size max_item, otherwise adjusts it in place to be 0 <= *index < max_item, and returns 0. 'axis' should be the array axis that is being indexed over, if known. If unknown, use -1.

Check that the index is valid, taking into account negative indices.
Try to be as clear as possible about what went wrong.
Adjust negative indices.

References PyArray_DATA, PyArray_DESCR, PyArray_NDIM, and PyArray_STRIDES.

Referenced by parse_index_entry(), and PyArray_TakeFrom().

NPY_NO_EXPORT int PyArray_DTypeFromObject ( PyObject *  obj,
int  maxdims,
PyArray_Descr **  out_dtype 
)
Recursively examines the object to determine an appropriate dtype to use for converting to an ndarray.
'obj' is the object to be converted to an ndarray.
'maxdims' is the maximum recursion depth.
'out_dtype' should be either NULL or a minimal starting dtype when the function is called. It is updated with the results of type promotion. This dtype does not get updated when processing NA objects. It is reset to NULL on failure.
Returns 0 on success, -1 on failure.

References NPY_UNICODE, and PyArray_DTypeFromObjectHelper().

Referenced by _array_from_buffer_3118().

NPY_NO_EXPORT int PyArray_DTypeFromObjectHelper ( PyObject *  obj,
int  maxdims,
PyArray_Descr **  out_dtype,
int  string_status 
)
  • Check if it's an ndarray.
  • Check if it's a NumPy scalar.
  • Check if it's a Python scalar.
  • Check if it's an ASCII string. If it's already a big enough string, don't bother type promoting.
  • Check if it's a Unicode string. If it's already a big enough unicode object, don't bother type promoting.
  • The array interface.
  • The array struct interface.
  • The old buffer interface.
  • The __array__ attribute.
  • Not exactly sure what this is about...
  • If we reached the maximum recursion depth without hitting one of the above cases, the output dtype should be OBJECT.
  • Recursive case: a recursive call for each sequence item. Set 'out_dtype' if it's NULL; otherwise do type promotion with 'out_dtype'.

References promote_types(), and PyArray_DESCR.

Referenced by PyArray_DTypeFromObject().