Modalities: Text
Formats: parquet
Libraries: Datasets, pandas
Column schema (per-row value ranges):

  hash     string, length 40
  repo     string, length 9-36
  date     string, length 19
  license  string, 3 distinct values
  message  string, length 86-367
  mods     list, length 1-15
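The card lists parquet files with the Datasets and pandas libraries, so a quick way to inspect rows like the samples below is the Hugging Face datasets loader. The snippet is only a usage sketch: the dataset identifier is a placeholder, since the card's actual repository name is not shown here.

    # Usage sketch; "user/commit-dataset" is a placeholder, not the real repo id.
    from datasets import load_dataset

    ds = load_dataset("user/commit-dataset", split="train")
    print(ds.features)                     # hash, repo, date, license, message, mods
    df = ds.to_pandas()                    # pandas is listed as a supported library
    print(df[["repo", "date", "message"]].head())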
hash: c27d31c06520c3df4c820ea10d5d16316f4d88cb
repo: cupy/cupy
date: 19.07.2017 16:24:41
license: MIT License
message: Support CUDA stream on memory pool Now, memory pool will have an arena (bins) for each stream to avoid concurrent streams touch the same memory block
mods:
[ { "change_type": "MODIFY", "diff": "@@ -1,4 +1,5 @@\n from libcpp cimport vector\n+from libcpp cimport unordered_map\n \n from cupy.cuda cimport device\n \n@@ -11,6 +12,7 @@ cdef class Chunk:\n readonly size_t ptr\n readonly Py_ssize_t offset\n readonly Py_ssize_t size\n+ public object stream_ptr\n public Chunk prev\n public Chunk next\n \n@@ -22,15 +24,16 @@ cdef class MemoryPointer:\n readonly size_t ptr\n \n cpdef copy_from_device(self, MemoryPointer src, Py_ssize_t size)\n- cpdef copy_from_device_async(self, MemoryPointer src, size_t size, stream)\n+ cpdef copy_from_device_async(self, MemoryPointer src, size_t size,\n+ stream=?)\n cpdef copy_from_host(self, mem, size_t size)\n- cpdef copy_from_host_async(self, mem, size_t size, stream)\n+ cpdef copy_from_host_async(self, mem, size_t size, stream=?)\n cpdef copy_from(self, mem, size_t size)\n- cpdef copy_from_async(self, mem, size_t size, stream)\n+ cpdef copy_from_async(self, mem, size_t size, stream=?)\n cpdef copy_to_host(self, mem, size_t size)\n- cpdef copy_to_host_async(self, mem, size_t size, stream)\n+ cpdef copy_to_host_async(self, mem, size_t size, stream=?)\n cpdef memset(self, int value, size_t size)\n- cpdef memset_async(self, int value, size_t size, stream)\n+ cpdef memset_async(self, int value, size_t size, stream=?)\n \n \n cpdef MemoryPointer alloc(Py_ssize_t size)\n@@ -44,14 +47,14 @@ cdef class SingleDeviceMemoryPool:\n cdef:\n object _allocator\n dict _in_use\n- list _free\n+ dict _free\n object __weakref__\n object _weakref\n object _free_lock\n object _in_use_lock\n readonly Py_ssize_t _allocation_unit_size\n readonly int _device_id\n- vector.vector[int] _index\n+ unordered_map.unordered_map[size_t, vector.vector[int]] _index\n \n cpdef MemoryPointer _alloc(self, Py_ssize_t size)\n cpdef MemoryPointer malloc(self, Py_ssize_t size)\n@@ -65,8 +68,11 @@ cdef class SingleDeviceMemoryPool:\n cpdef total_bytes(self)\n cpdef Py_ssize_t _round_size(self, Py_ssize_t size)\n cpdef int _bin_index_from_size(self, Py_ssize_t size)\n- cpdef _append_to_free_list(self, Py_ssize_t size, chunk)\n- cpdef bint _remove_from_free_list(self, Py_ssize_t size, chunk) except *\n+ cpdef list _arena(self, size_t stream_ptr)\n+ cdef vector.vector[int]* _arena_index(self, size_t stream_ptr)\n+ cpdef _append_to_free_list(self, Py_ssize_t size, chunk, size_t stream_ptr)\n+ cpdef bint _remove_from_free_list(self, Py_ssize_t size,\n+ chunk, size_t stream_ptr) except *\n cpdef tuple _split(self, Chunk chunk, Py_ssize_t size)\n cpdef Chunk _merge(self, Chunk head, Chunk remaining)\n \n", "new_path": "cupy/cuda/memory.pxd", "old_path": "cupy/cuda/memory.pxd" }, { "change_type": "MODIFY", "diff": "@@ -10,6 +10,7 @@ from libcpp cimport algorithm\n \n from cupy.cuda import memory_hook\n from cupy.cuda import runtime\n+from cupy.cuda import stream as stream_module\n \n from cupy.cuda cimport device\n from cupy.cuda cimport runtime\n@@ -126,24 +127,27 @@ cdef class Chunk:\n mem (Memory): The device memory buffer.\n offset (int): An offset bytes from the head of the buffer.\n size (int): Chunk size in bytes.\n+ stream_ptr (size_t): Raw stream handle of cupy.cuda.Stream\n \n Attributes:\n device (cupy.cuda.Device): Device whose memory the pointer refers to.\n mem (Memory): The device memory buffer.\n- ptr (int): Memory address.\n+ ptr (size_t): Memory address.\n offset (int): An offset bytes from the head of the buffer.\n size (int): Chunk size in bytes.\n prev (Chunk): prev memory pointer if split from a larger allocation\n next (Chunk): next memory 
pointer if split from a larger allocation\n+ stream_ptr (size_t): Raw stream handle of cupy.cuda.Stream\n \"\"\"\n \n- def __init__(self, mem, Py_ssize_t offset, Py_ssize_t size):\n+ def __init__(self, mem, Py_ssize_t offset, Py_ssize_t size, stream_ptr):\n assert mem.ptr > 0 or offset == 0\n self.mem = mem\n self.device = mem.device\n self.ptr = mem.ptr + offset\n self.offset = offset\n self.size = size\n+ self.stream_ptr = stream_ptr\n self.prev = None\n self.next = None\n \n@@ -163,7 +167,7 @@ cdef class MemoryPointer:\n ~MemoryPointer.device (cupy.cuda.Device): Device whose memory the\n pointer refers to.\n mem (Memory): The device memory buffer.\n- ptr (int): Pointer to the place within the buffer.\n+ ptr (size_t): Pointer to the place within the buffer.\n \"\"\"\n \n def __init__(self, mem, Py_ssize_t offset):\n@@ -217,15 +221,19 @@ cdef class MemoryPointer:\n runtime.memcpy(self.ptr, src.ptr, size,\n runtime.memcpyDefault)\n \n- cpdef copy_from_device_async(self, MemoryPointer src, size_t size, stream):\n+ cpdef copy_from_device_async(self, MemoryPointer src, size_t size,\n+ stream=None):\n \"\"\"Copies a memory from a (possibly different) device asynchronously.\n \n Args:\n src (cupy.cuda.MemoryPointer): Source memory pointer.\n size (int): Size of the sequence in bytes.\n stream (cupy.cuda.Stream): CUDA stream.\n+ The default uses CUDA stream of the current context.\n \n \"\"\"\n+ if stream is None:\n+ stream = stream_module.get_current_stream()\n if size > 0:\n _set_peer_access(src.device.id, self.device.id)\n runtime.memcpyAsync(self.ptr, src.ptr, size,\n@@ -243,7 +251,7 @@ cdef class MemoryPointer:\n runtime.memcpy(self.ptr, mem.value, size,\n runtime.memcpyHostToDevice)\n \n- cpdef copy_from_host_async(self, mem, size_t size, stream):\n+ cpdef copy_from_host_async(self, mem, size_t size, stream=None):\n \"\"\"Copies a memory sequence from the host memory asynchronously.\n \n Args:\n@@ -251,8 +259,11 @@ cdef class MemoryPointer:\n memory.\n size (int): Size of the sequence in bytes.\n stream (cupy.cuda.Stream): CUDA stream.\n+ The default uses CUDA stream of the current context.\n \n \"\"\"\n+ if stream is None:\n+ stream = stream_module.get_current_stream()\n if size > 0:\n runtime.memcpyAsync(self.ptr, mem.value, size,\n runtime.memcpyHostToDevice, stream.ptr)\n@@ -275,7 +286,7 @@ cdef class MemoryPointer:\n else:\n self.copy_from_host(mem, size)\n \n- cpdef copy_from_async(self, mem, size_t size, stream):\n+ cpdef copy_from_async(self, mem, size_t size, stream=None):\n \"\"\"Copies a memory sequence from an arbitrary place asynchronously.\n \n This function is a useful interface that selects appropriate one from\n@@ -287,8 +298,11 @@ cdef class MemoryPointer:\n pointer.\n size (int): Size of the sequence in bytes.\n stream (cupy.cuda.Stream): CUDA stream.\n+ The default uses CUDA stream of the current context.\n \n \"\"\"\n+ if stream is None:\n+ stream = stream_module.get_current_stream()\n if isinstance(mem, MemoryPointer):\n self.copy_from_device_async(mem, size, stream)\n else:\n@@ -306,7 +320,7 @@ cdef class MemoryPointer:\n runtime.memcpy(mem.value, self.ptr, size,\n runtime.memcpyDeviceToHost)\n \n- cpdef copy_to_host_async(self, mem, size_t size, stream):\n+ cpdef copy_to_host_async(self, mem, size_t size, stream=None):\n \"\"\"Copies a memory sequence to the host memory asynchronously.\n \n Args:\n@@ -314,8 +328,11 @@ cdef class MemoryPointer:\n memory.\n size (int): Size of the sequence in bytes.\n stream (cupy.cuda.Stream): CUDA stream.\n+ The default uses CUDA 
stream of the current context.\n \n \"\"\"\n+ if stream is None:\n+ stream = stream_module.get_current_stream()\n if size > 0:\n runtime.memcpyAsync(mem.value, self.ptr, size,\n runtime.memcpyDeviceToHost, stream.ptr)\n@@ -331,15 +348,18 @@ cdef class MemoryPointer:\n if size > 0:\n runtime.memset(self.ptr, value, size)\n \n- cpdef memset_async(self, int value, size_t size, stream):\n+ cpdef memset_async(self, int value, size_t size, stream=None):\n \"\"\"Fills a memory sequence by constant byte value asynchronously.\n \n Args:\n value (int): Value to fill.\n size (int): Size of the sequence in bytes.\n stream (cupy.cuda.Stream): CUDA stream.\n+ The default uses CUDA stream of the current context.\n \n \"\"\"\n+ if stream is None:\n+ stream = stream_module.get_current_stream()\n if size > 0:\n runtime.memsetAsync(self.ptr, value, size, stream.ptr)\n \n@@ -482,7 +502,7 @@ cdef class SingleDeviceMemoryPool:\n # cf. https://gist.github.com/sonots/41daaa6432b1c8b27ef782cd14064269\n self._allocation_unit_size = 512\n self._in_use = {}\n- self._free = []\n+ self._free = {}\n self._allocator = allocator\n self._weakref = weakref.ref(self)\n self._device_id = device.get_device_id()\n@@ -499,38 +519,62 @@ cdef class SingleDeviceMemoryPool:\n unit = self._allocation_unit_size\n return (size - 1) // unit\n \n- cpdef _append_to_free_list(self, Py_ssize_t size, chunk):\n+ cpdef list _arena(self, size_t stream_ptr):\n+ \"\"\"Get appropriate arena (list of bins) of a given stream\"\"\"\n+ if stream_ptr not in self._free:\n+ self._free[stream_ptr] = []\n+ return self._free[stream_ptr]\n+\n+ cdef vector.vector[int]* _arena_index(self, size_t stream_ptr):\n+ \"\"\"Get appropriate arena sparse index of a given stream\"\"\"\n+ if self._index.count(stream_ptr) == 0:\n+ self._index[stream_ptr] = vector.vector[int]()\n+ return &self._index[stream_ptr]\n+\n+ cpdef _append_to_free_list(self, Py_ssize_t size, chunk,\n+ size_t stream_ptr):\n cdef int index, bin_index\n+ cdef list arena\n cdef set free_list\n+ cdef vector.vector[int]* arena_index\n+\n bin_index = self._bin_index_from_size(size)\n rlock.lock_fastrlock(self._free_lock, -1, True)\n try:\n+ arena = self._arena(stream_ptr)\n+ arena_index = self._arena_index(stream_ptr)\n index = algorithm.lower_bound(\n- self._index.begin(), self._index.end(),\n- bin_index) - self._index.begin()\n- if index < self._index.size() and self._index[index] == bin_index:\n- free_list = self._free[index]\n+ arena_index.begin(), arena_index.end(),\n+ bin_index) - arena_index.begin()\n+ size = <int>arena_index.size()\n+ if index < size and arena_index.at(index) == bin_index:\n+ free_list = arena[index]\n else:\n free_list = set()\n- self._index.insert(\n- self._index.begin() + index, bin_index)\n- self._free.insert(index, free_list)\n+ arena_index.insert(arena_index.begin() + index, bin_index)\n+ arena.insert(index, free_list)\n free_list.add(chunk)\n finally:\n rlock.unlock_fastrlock(self._free_lock)\n \n- cpdef bint _remove_from_free_list(self, Py_ssize_t size, chunk) except *:\n+ cpdef bint _remove_from_free_list(self, Py_ssize_t size, chunk,\n+ size_t stream_ptr) except *:\n cdef int index, bin_index\n+ cdef list arena\n cdef set free_list\n+ cdef vector.vector[int]* arena_index\n+\n bin_index = self._bin_index_from_size(size)\n rlock.lock_fastrlock(self._free_lock, -1, True)\n try:\n+ arena = self._arena(stream_ptr)\n+ arena_index = self._arena_index(stream_ptr)\n index = algorithm.lower_bound(\n- self._index.begin(), self._index.end(),\n- bin_index) - self._index.begin()\n- 
if self._index[index] != bin_index:\n+ arena_index.begin(), arena_index.end(),\n+ bin_index) - arena_index.begin()\n+ if arena_index.at(index) != bin_index:\n return False\n- free_list = self._free[index]\n+ free_list = arena[index]\n if chunk in free_list:\n free_list.remove(chunk)\n return True\n@@ -545,8 +589,9 @@ cdef class SingleDeviceMemoryPool:\n assert chunk.size >= size\n if chunk.size == size:\n return chunk, None\n- head = Chunk(chunk.mem, chunk.offset, size)\n- remaining = Chunk(chunk.mem, chunk.offset + size, chunk.size - size)\n+ head = Chunk(chunk.mem, chunk.offset, size, chunk.stream_ptr)\n+ remaining = Chunk(chunk.mem, chunk.offset + size, chunk.size - size,\n+ chunk.stream_ptr)\n if chunk.prev is not None:\n head.prev = chunk.prev\n chunk.prev.next = head\n@@ -559,9 +604,10 @@ cdef class SingleDeviceMemoryPool:\n \n cpdef Chunk _merge(self, Chunk head, Chunk remaining):\n \"\"\"Merge previously splitted block (chunk)\"\"\"\n+ assert head.stream_ptr == remaining.stream_ptr\n cdef Chunk merged\n size = head.size + remaining.size\n- merged = Chunk(head.mem, head.offset, size)\n+ merged = Chunk(head.mem, head.offset, size, head.stream_ptr)\n if head.prev is not None:\n merged.prev = head.prev\n merged.prev.next = merged\n@@ -630,16 +676,20 @@ cdef class SingleDeviceMemoryPool:\n if size == 0:\n return MemoryPointer(Memory(0), 0)\n \n+ stream_ptr = stream_module.get_current_stream().ptr\n+\n+ bin_index = self._bin_index_from_size(size)\n # find best-fit, or a smallest larger allocation\n rlock.lock_fastrlock(self._free_lock, -1, True)\n- bin_index = self._bin_index_from_size(size)\n try:\n+ arena = self._arena(stream_ptr)\n+ arena_index = self._arena_index(stream_ptr)\n index = algorithm.lower_bound(\n- self._index.begin(), self._index.end(),\n- bin_index) - self._index.begin()\n- length = self._index.size()\n+ arena_index.begin(), arena_index.end(),\n+ bin_index) - arena_index.begin()\n+ length = arena_index.size()\n for i in range(index, length):\n- free_list = self._free[i]\n+ free_list = arena[i]\n if free_list:\n chunk = free_list.pop()\n break\n@@ -670,15 +720,16 @@ cdef class SingleDeviceMemoryPool:\n else:\n total = size + self.total_bytes()\n raise OutOfMemoryError(size, total)\n- chunk = Chunk(mem, 0, size)\n+ chunk = Chunk(mem, 0, size, stream_ptr)\n \n+ assert chunk.stream_ptr == stream_ptr\n rlock.lock_fastrlock(self._in_use_lock, -1, True)\n try:\n self._in_use[chunk.ptr] = chunk\n finally:\n rlock.unlock_fastrlock(self._in_use_lock)\n if remaining is not None:\n- self._append_to_free_list(remaining.size, remaining)\n+ self._append_to_free_list(remaining.size, remaining, stream_ptr)\n pmem = PooledMemory(chunk, self._weakref)\n return MemoryPointer(pmem, 0)\n \n@@ -693,16 +744,19 @@ cdef class SingleDeviceMemoryPool:\n rlock.unlock_fastrlock(self._in_use_lock)\n if chunk is None:\n raise RuntimeError('Cannot free out-of-pool memory')\n+ stream_ptr = chunk.stream_ptr\n \n if chunk.next is not None:\n- if self._remove_from_free_list(chunk.next.size, chunk.next):\n+ if self._remove_from_free_list(chunk.next.size, chunk.next,\n+ stream_ptr):\n chunk = self._merge(chunk, chunk.next)\n \n if chunk.prev is not None:\n- if self._remove_from_free_list(chunk.prev.size, chunk.prev):\n+ if self._remove_from_free_list(chunk.prev.size, chunk.prev,\n+ stream_ptr):\n chunk = self._merge(chunk.prev, chunk)\n \n- self._append_to_free_list(chunk.size, chunk)\n+ self._append_to_free_list(chunk.size, chunk, stream_ptr)\n \n cpdef free_all_blocks(self):\n cdef set free_list, 
keep_list\n@@ -710,13 +764,14 @@ cdef class SingleDeviceMemoryPool:\n # Free all **non-split** chunks\n rlock.lock_fastrlock(self._free_lock, -1, True)\n try:\n- for i in range(len(self._free)):\n- free_list = self._free[i]\n- keep_list = set()\n- for chunk in free_list:\n- if chunk.prev is not None or chunk.next is not None:\n- keep_list.add(chunk)\n- self._free[i] = keep_list\n+ for arena in self._free.itervalues():\n+ for i in range(len(arena)):\n+ free_list = arena[i]\n+ keep_list = set()\n+ for chunk in free_list:\n+ if chunk.prev is not None or chunk.next is not None:\n+ keep_list.add(chunk)\n+ arena[i] = keep_list\n finally:\n rlock.unlock_fastrlock(self._free_lock)\n \n@@ -731,8 +786,9 @@ cdef class SingleDeviceMemoryPool:\n cdef set free_list\n rlock.lock_fastrlock(self._free_lock, -1, True)\n try:\n- for free_list in self._free:\n- n += len(free_list)\n+ for arena in self._free.itervalues():\n+ for v in arena:\n+ n += len(v)\n finally:\n rlock.unlock_fastrlock(self._free_lock)\n return n\n@@ -754,9 +810,10 @@ cdef class SingleDeviceMemoryPool:\n cdef Chunk chunk\n rlock.lock_fastrlock(self._free_lock, -1, True)\n try:\n- for free_list in self._free:\n- for chunk in free_list:\n- size += chunk.size\n+ for arena in self._free.itervalues():\n+ for free_list in arena:\n+ for chunk in free_list:\n+ size += chunk.size\n finally:\n rlock.unlock_fastrlock(self._free_lock)\n return size\n", "new_path": "cupy/cuda/memory.pyx", "old_path": "cupy/cuda/memory.pyx" }, { "change_type": "MODIFY", "diff": "@@ -3,6 +3,7 @@ import unittest\n \n import cupy.cuda\n from cupy.cuda import memory\n+from cupy.cuda import stream as stream_module\n from cupy import testing\n \n \n@@ -105,6 +106,8 @@ class TestSingleDeviceMemoryPool(unittest.TestCase):\n def setUp(self):\n self.pool = memory.SingleDeviceMemoryPool(allocator=mock_alloc)\n self.unit = self.pool._allocation_unit_size\n+ self.stream = stream_module.Stream()\n+ self.stream_ptr = self.stream.ptr\n \n def test_round_size(self):\n self.assertEqual(self.pool._round_size(self.unit - 1), self.unit)\n@@ -118,46 +121,52 @@ class TestSingleDeviceMemoryPool(unittest.TestCase):\n \n def test_split(self):\n mem = MockMemory(self.unit * 4)\n- chunk = memory.Chunk(mem, 0, mem.size)\n+ chunk = memory.Chunk(mem, 0, mem.size, self.stream_ptr)\n head, tail = self.pool._split(chunk, self.unit * 2)\n- self.assertEqual(head.ptr, chunk.ptr)\n- self.assertEqual(head.offset, 0)\n- self.assertEqual(head.size, self.unit * 2)\n- self.assertEqual(head.prev, None)\n+ self.assertEqual(head.ptr, chunk.ptr)\n+ self.assertEqual(head.offset, 0)\n+ self.assertEqual(head.size, self.unit * 2)\n+ self.assertEqual(head.prev, None)\n self.assertEqual(head.next.ptr, tail.ptr)\n- self.assertEqual(tail.ptr, chunk.ptr + self.unit * 2)\n- self.assertEqual(tail.offset, self.unit * 2)\n- self.assertEqual(tail.size, self.unit * 2)\n+ self.assertEqual(head.stream_ptr, self.stream_ptr)\n+ self.assertEqual(tail.ptr, chunk.ptr + self.unit * 2)\n+ self.assertEqual(tail.offset, self.unit * 2)\n+ self.assertEqual(tail.size, self.unit * 2)\n self.assertEqual(tail.prev.ptr, head.ptr)\n- self.assertEqual(tail.next, None)\n+ self.assertEqual(tail.next, None)\n+ self.assertEqual(tail.stream_ptr, self.stream_ptr)\n \n head_of_head, tail_of_head = self.pool._split(head, self.unit)\n- self.assertEqual(head_of_head.ptr, chunk.ptr)\n- self.assertEqual(head_of_head.offset, 0)\n- self.assertEqual(head_of_head.size, self.unit)\n- self.assertEqual(head_of_head.prev, None)\n+ self.assertEqual(head_of_head.ptr, 
chunk.ptr)\n+ self.assertEqual(head_of_head.offset, 0)\n+ self.assertEqual(head_of_head.size, self.unit)\n+ self.assertEqual(head_of_head.prev, None)\n self.assertEqual(head_of_head.next.ptr, tail_of_head.ptr)\n- self.assertEqual(tail_of_head.ptr, chunk.ptr + self.unit)\n- self.assertEqual(tail_of_head.offset, self.unit)\n- self.assertEqual(tail_of_head.size, self.unit)\n+ self.assertEqual(head_of_head.stream_ptr, self.stream_ptr)\n+ self.assertEqual(tail_of_head.ptr, chunk.ptr + self.unit)\n+ self.assertEqual(tail_of_head.offset, self.unit)\n+ self.assertEqual(tail_of_head.size, self.unit)\n self.assertEqual(tail_of_head.prev.ptr, head_of_head.ptr)\n self.assertEqual(tail_of_head.next.ptr, tail.ptr)\n+ self.assertEqual(tail_of_head.stream_ptr, self.stream_ptr)\n \n head_of_tail, tail_of_tail = self.pool._split(tail, self.unit)\n- self.assertEqual(head_of_tail.ptr, chunk.ptr + self.unit * 2)\n- self.assertEqual(head_of_tail.offset, self.unit * 2)\n- self.assertEqual(head_of_tail.size, self.unit)\n+ self.assertEqual(head_of_tail.ptr, chunk.ptr + self.unit * 2)\n+ self.assertEqual(head_of_tail.offset, self.unit * 2)\n+ self.assertEqual(head_of_tail.size, self.unit)\n self.assertEqual(head_of_tail.prev.ptr, tail_of_head.ptr)\n self.assertEqual(head_of_tail.next.ptr, tail_of_tail.ptr)\n- self.assertEqual(tail_of_tail.ptr, chunk.ptr + self.unit * 3)\n- self.assertEqual(tail_of_tail.offset, self.unit * 3)\n- self.assertEqual(tail_of_tail.size, self.unit)\n+ self.assertEqual(head_of_tail.stream_ptr, self.stream_ptr)\n+ self.assertEqual(tail_of_tail.ptr, chunk.ptr + self.unit * 3)\n+ self.assertEqual(tail_of_tail.offset, self.unit * 3)\n+ self.assertEqual(tail_of_tail.size, self.unit)\n self.assertEqual(tail_of_tail.prev.ptr, head_of_tail.ptr)\n- self.assertEqual(tail_of_tail.next, None)\n+ self.assertEqual(tail_of_tail.next, None)\n+ self.assertEqual(tail_of_tail.stream_ptr, self.stream_ptr)\n \n def test_merge(self):\n mem = MockMemory(self.unit * 4)\n- chunk = memory.Chunk(mem, 0, mem.size)\n+ chunk = memory.Chunk(mem, 0, mem.size, self.stream_ptr)\n \n head, tail = self.pool._split(chunk, self.unit * 2)\n head_ptr, tail_ptr = head.ptr, tail.ptr\n@@ -165,25 +174,28 @@ class TestSingleDeviceMemoryPool(unittest.TestCase):\n head_of_tail, tail_of_tail = self.pool._split(tail, self.unit)\n \n merged_head = self.pool._merge(head_of_head, tail_of_head)\n- self.assertEqual(merged_head.ptr, head.ptr)\n- self.assertEqual(merged_head.offset, head.offset)\n- self.assertEqual(merged_head.size, head.size)\n- self.assertEqual(merged_head.prev, None)\n+ self.assertEqual(merged_head.ptr, head.ptr)\n+ self.assertEqual(merged_head.offset, head.offset)\n+ self.assertEqual(merged_head.size, head.size)\n+ self.assertEqual(merged_head.prev, None)\n self.assertEqual(merged_head.next.ptr, tail_ptr)\n+ self.assertEqual(merged_head.stream_ptr, self.stream_ptr)\n \n merged_tail = self.pool._merge(head_of_tail, tail_of_tail)\n- self.assertEqual(merged_tail.ptr, tail.ptr)\n- self.assertEqual(merged_tail.offset, tail.offset)\n- self.assertEqual(merged_tail.size, tail.size)\n+ self.assertEqual(merged_tail.ptr, tail.ptr)\n+ self.assertEqual(merged_tail.offset, tail.offset)\n+ self.assertEqual(merged_tail.size, tail.size)\n self.assertEqual(merged_tail.prev.ptr, head_ptr)\n- self.assertEqual(merged_tail.next, None)\n+ self.assertEqual(merged_tail.next, None)\n+ self.assertEqual(merged_tail.stream_ptr, self.stream_ptr)\n \n merged = self.pool._merge(merged_head, merged_tail)\n- self.assertEqual(merged.ptr, chunk.ptr)\n+ 
self.assertEqual(merged.ptr, chunk.ptr)\n self.assertEqual(merged.offset, chunk.offset)\n- self.assertEqual(merged.size, chunk.size)\n- self.assertEqual(merged.prev, None)\n- self.assertEqual(merged.next, None)\n+ self.assertEqual(merged.size, chunk.size)\n+ self.assertEqual(merged.prev, None)\n+ self.assertEqual(merged.next, None)\n+ self.assertEqual(merged.stream_ptr, self.stream_ptr)\n \n def test_alloc(self):\n p1 = self.pool.malloc(self.unit * 4)\n@@ -209,6 +221,14 @@ class TestSingleDeviceMemoryPool(unittest.TestCase):\n p2 = self.pool.malloc(self.unit * 4)\n self.assertEqual(ptr1, p2.ptr)\n \n+ def test_free_stream(self):\n+ p1 = self.pool.malloc(self.unit * 4)\n+ ptr1 = p1.ptr\n+ del p1\n+ with self.stream:\n+ p2 = self.pool.malloc(self.unit * 4)\n+ self.assertNotEqual(ptr1, p2.ptr)\n+\n def test_free_merge(self):\n p = self.pool.malloc(self.unit * 4)\n ptr = p.ptr\n@@ -250,7 +270,10 @@ class TestSingleDeviceMemoryPool(unittest.TestCase):\n self.assertNotEqual(ptr1, p2.ptr)\n del p2\n \n+ def test_free_all_blocks_split(self):\n # do not free splitted blocks\n+ p = self.pool.malloc(self.unit * 4)\n+ del p\n head = self.pool.malloc(self.unit * 2)\n tail = self.pool.malloc(self.unit * 2)\n tailptr = tail.ptr\n@@ -260,6 +283,23 @@ class TestSingleDeviceMemoryPool(unittest.TestCase):\n self.assertEqual(tailptr, p.ptr)\n del head\n \n+ def test_free_all_blocks_stream(self):\n+ p1 = self.pool.malloc(self.unit * 4)\n+ ptr1 = p1.ptr\n+ del p1\n+ with self.stream:\n+ p2 = self.pool.malloc(self.unit * 4)\n+ ptr2 = p2.ptr\n+ del p2\n+ self.pool.free_all_blocks()\n+ p3 = self.pool.malloc(self.unit * 4)\n+ self.assertNotEqual(ptr1, p3.ptr)\n+ self.assertNotEqual(ptr2, p3.ptr)\n+ with self.stream:\n+ p4 = self.pool.malloc(self.unit * 4)\n+ self.assertNotEqual(ptr1, p4.ptr)\n+ self.assertNotEqual(ptr2, p4.ptr)\n+\n def test_free_all_free(self):\n p1 = self.pool.malloc(self.unit * 4)\n ptr1 = p1.ptr\n@@ -282,6 +322,14 @@ class TestSingleDeviceMemoryPool(unittest.TestCase):\n self.assertEqual(self.unit * 1, self.pool.used_bytes())\n del p3\n \n+ def test_used_bytes_stream(self):\n+ p1 = self.pool.malloc(self.unit * 4)\n+ del p1\n+ with self.stream:\n+ p2 = self.pool.malloc(self.unit * 2)\n+ self.assertEqual(self.unit * 2, self.pool.used_bytes())\n+ del p2\n+\n def test_free_bytes(self):\n p1 = self.pool.malloc(self.unit * 2)\n self.assertEqual(self.unit * 0, self.pool.free_bytes())\n@@ -295,6 +343,14 @@ class TestSingleDeviceMemoryPool(unittest.TestCase):\n self.assertEqual(self.unit * 5, self.pool.free_bytes())\n del p3\n \n+ def test_free_bytes_stream(self):\n+ p1 = self.pool.malloc(self.unit * 4)\n+ del p1\n+ with self.stream:\n+ p2 = self.pool.malloc(self.unit * 2)\n+ self.assertEqual(self.unit * 4, self.pool.free_bytes())\n+ del p2\n+\n def test_total_bytes(self):\n p1 = self.pool.malloc(self.unit * 2)\n self.assertEqual(self.unit * 2, self.pool.total_bytes())\n@@ -308,6 +364,14 @@ class TestSingleDeviceMemoryPool(unittest.TestCase):\n self.assertEqual(self.unit * 6, self.pool.total_bytes())\n del p3\n \n+ def test_total_bytes_stream(self):\n+ p1 = self.pool.malloc(self.unit * 4)\n+ del p1\n+ with self.stream:\n+ p2 = self.pool.malloc(self.unit * 2)\n+ self.assertEqual(self.unit * 6, self.pool.total_bytes())\n+ del p2\n+\n \n @testing.parameterize(*testing.product({\n 'allocator': [memory._malloc, memory.malloc_managed],\n", "new_path": "tests/cupy_tests/cuda_tests/test_memory.py", "old_path": "tests/cupy_tests/cuda_tests/test_memory.py" } ]
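The commit message above describes giving the memory pool a separate arena of free-list bins per CUDA stream, so that a chunk freed on one stream is never reused on another. A tiny, self-contained sketch of that bookkeeping idea follows; it is not CuPy's implementation, and the names (TinyPool, malloc, free) are made up for illustration.

    # Illustration only: free chunks are binned per stream handle, so memory
    # freed on one stream is never handed back out on a different stream.
    from collections import defaultdict


    class TinyPool:
        def __init__(self):
            # stream_ptr -> {size -> list of free chunks}
            self._free = defaultdict(lambda: defaultdict(list))
            self._next_id = 0

        def malloc(self, size, stream_ptr):
            arena = self._free[stream_ptr]        # arena for this stream only
            if arena[size]:
                return arena[size].pop()          # reuse a chunk freed on this stream
            self._next_id += 1
            return (self._next_id, size, stream_ptr)   # pretend fresh allocation

        def free(self, chunk):
            _, size, stream_ptr = chunk
            self._free[stream_ptr][size].append(chunk)


    pool = TinyPool()
    c = pool.malloc(512, stream_ptr=0)
    pool.free(c)
    assert pool.malloc(512, stream_ptr=0) == c    # same stream: chunk is reused
    assert pool.malloc(512, stream_ptr=1) != c    # different stream: new chunk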
hash: 6683a9aa7bae67e855cd9d1f17fdc49eb3f6dea0
repo: cupy/cupy
date: 17.06.2020 22:41:09
license: MIT License
message: Complete overhaul of filter testing. These tests are much more flexible now for when additional filters are added.
mods:
[ { "change_type": "MODIFY", "diff": "@@ -11,359 +11,349 @@ try:\n except ImportError:\n pass\n \n-# ######### Testing convolve and correlate ##########\n \n+class FilterTestCaseBase(unittest.TestCase):\n+ \"\"\"\n+ Add some utility methods for the parameterized tests for filters. these\n+ assume there are the \"parameters\" self.filter, self.wdtype or self.dtype,\n+ and self.ndim, self.kshape, or self.shape. Other optional \"parameters\" are\n+ also used if available like self.footprint when the filter is a filter\n+ that uses the footprint. These methods allow testing across multiple\n+ filter types much more easily.\n+ \"\"\"\n+\n+ # default param values if not provided\n+ filter = 'convolve'\n+ shape = (4, 5)\n+ ksize = 3\n+ dtype = numpy.float64\n+ footprint = True\n \[email protected](*(\n- testing.product({\n- 'shape': [(3, 4), (2, 3, 4), (1, 2, 3, 4)],\n- 'ksize': [3, 4],\n- 'mode': ['reflect'],\n- 'cval': [0.0],\n- 'origin': [0, 1, None],\n- 'adtype': [numpy.int8, numpy.int16, numpy.int32,\n- numpy.float32, numpy.float64],\n- 'wdtype': [None, numpy.int32, numpy.float64],\n- 'output': [None, numpy.int32, numpy.float64],\n- 'filter': ['convolve', 'correlate']\n- }) + testing.product({\n- 'shape': [(3, 4), (2, 3, 4), (1, 2, 3, 4)],\n- 'ksize': [3, 4],\n- 'mode': ['constant'],\n- 'cval': [-1.0, 0.0, 1.0],\n- 'origin': [0],\n- 'adtype': [numpy.int32, numpy.float64],\n- 'wdtype': [None],\n- 'output': [None],\n- 'filter': ['convolve', 'correlate']\n- }) + testing.product({\n- 'shape': [(3, 4), (2, 3, 4), (1, 2, 3, 4)],\n- 'ksize': [3, 4],\n- 'mode': ['nearest', 'mirror', 'wrap'],\n- 'cval': [0.0],\n- 'origin': [0],\n- 'adtype': [numpy.int32, numpy.float64],\n- 'wdtype': [None],\n- 'output': [None],\n- 'filter': ['convolve', 'correlate']\n- })\n-))\[email protected]\[email protected]_requires('scipy')\n-class TestConvolveAndCorrelate(unittest.TestCase):\n \n- def _filter(self, xp, scp, a, w):\n+ # Params that need no processing and just go into kwargs\n+ KWARGS_PARAMS = ('output', 'axis', 'mode', 'cval')\n+\n+\n+ def _filter(self, xp, scp):\n+ \"\"\"\n+ The function that all tests end up calling, possibly after a few\n+ adjustments to the class \"parameters\".\n+ \"\"\"\n+ # The filter function\n filter = getattr(scp.ndimage, self.filter)\n- if self.origin is None:\n- origin = (-1, 1, -1, 1)[:a.ndim]\n- else:\n- origin = self.origin\n- return filter(a, w, output=self.output, mode=self.mode,\n- cval=self.cval, origin=origin)\n \n- @testing.numpy_cupy_allclose(atol=1e-5, rtol=1e-5, scipy_name='scp')\n- def test_convolve_and_correlate(self, xp, scp):\n- if 1 in self.shape and self.mode == 'mirror':\n- raise unittest.SkipTest(\"requires scipy>1.5.0, tested later\")\n- if self.adtype == self.wdtype or self.adtype == self.output:\n- raise unittest.SkipTest(\"redundant\")\n- a = testing.shaped_random(self.shape, xp, self.adtype)\n- if self.wdtype is None:\n- wdtype = self.adtype\n- else:\n- wdtype = self.wdtype\n- w = testing.shaped_random((self.ksize,) * a.ndim, xp, wdtype)\n- return self._filter(xp, scp, a, w)\n+ # The kwargs to pass to the filter function\n+ kwargs = {param:getattr(self, param)\n+ for param in FilterTestCaseBase.KWARGS_PARAMS\n+ if hasattr(self, param)}\n+ if hasattr(self, 'origin'):\n+ kwargs['origin'] = self._origin\n+\n+ # The array we are filtering\n+ arr = testing.shaped_random(self.shape, xp, self.dtype)\n+\n+ # The weights we are using to filter\n+ wghts = self._get_weights(xp)\n+ if isinstance(wghts, tuple) and len(wghts) == 2 and wghts[0] is None:\n+ # w is actually 
a tuple of (None, footprint)\n+ wghts, kwargs['footprint'] = wghts\n+\n+ # Actually perform filtering\n+ return filter(arr, wghts, **kwargs)\n+\n+\n+ def _get_weights(self, xp):\n+ # Gets the second argument to the filter functions.\n+ # For convolve/correlate/convolve1d/correlate1d this is the weights.\n+ # For minimum_filter1d/maximum_filter1d this is the kernel size.\n+ #\n+ # For minimum_filter/maximum_filter this is a bit more complicated and\n+ # is either the kernel size or a tuple of None and the footprint. The\n+ # _filter() method knows about this and handles it automatically.\n+\n+ if self.filter in ('convolve', 'correlate'):\n+ return testing.shaped_random(self._kshape, xp, self._dtype)\n+\n+ if self.filter in ('convolve1d', 'correlate1d'):\n+ return testing.shaped_random((self.ksize,), xp, self._dtype)\n+\n+ if self.filter in ('minimum_filter', 'maximum_filter'):\n+ if not self.footprint:\n+ return self.ksize\n+ kshape = self._kshape\n+ footprint = testing.shaped_random(kshape, xp, scale=1) > 0.5\n+ if not footprint.any():\n+ footprint = xp.ones(kshape)\n+ return None, footprint\n \n+ if self.filter in ('minimum_filter1d', 'maximum_filter1d'):\n+ return self.ksize\n \[email protected](*testing.product({\n- 'shape': [(1, 2, 3, 4)],\n+ raise RuntimeError('unsupported filter name')\n+\n+\n+ @property\n+ def _dtype(self):\n+ return getattr(self, 'wdtype', None) or self.dtype\n+\n+\n+ @property\n+ def _ndim(self):\n+ return getattr(self, 'ndim', len(getattr(self, 'shape', [])))\n+\n+\n+ @property\n+ def _kshape(self):\n+ return getattr(self, 'kshape', (self.ksize,) * self._ndim)\n+\n+\n+ @property\n+ def _origin(self):\n+ origin = getattr(self, 'origin', 0)\n+ if origin is not None:\n+ return origin\n+ is_1d = self.filter.endswith('1d')\n+ return -1 if is_1d else (-1, 1, -1, 1)[:self._ndim]\n+\n+\n+# Parameters common across all modes (with some overrides)\n+COMMON_PARAMS = {\n+ 'shape': [(4, 5), (3, 4, 5), (1, 3, 4, 5)],\n 'ksize': [3, 4],\n 'dtype': [numpy.int32, numpy.float64],\n- 'filter': ['convolve', 'correlate']\n-}))\[email protected]\n-# SciPy behavior fixed in 1.5.0: https://github.com/scipy/scipy/issues/11661\[email protected]_requires('scipy>=1.5.0')\n-class TestConvolveAndCorrelateMirrorDim1(unittest.TestCase):\n- @testing.numpy_cupy_allclose(atol=1e-5, rtol=1e-5, scipy_name='scp')\n- def test_convolve_and_correlate(self, xp, scp):\n- a = testing.shaped_random(self.shape, xp, self.dtype)\n- w = testing.shaped_random((self.ksize,) * a.ndim, xp, self.dtype)\n- filter = getattr(scp.ndimage, self.filter)\n- return filter(a, w, output=None, mode='mirror', cval=0.0, origin=0)\n+}\n \n \[email protected](*testing.product({\n- 'ndim': [2, 3],\n- 'dtype': [numpy.int32, numpy.float64],\n- 'filter': ['convolve', 'correlate']\n-}))\n+# The bulk of the tests are done with this class\[email protected](*(\n+ testing.product([\n+ # Filter-function specific params\n+ testing.product({\n+ 'filter': ['convolve', 'correlate'],\n+ }) + testing.product({\n+ 'filter': ['convolve1d', 'correlate1d',\n+ 'minimum_filter1d', 'maximum_filter1d'],\n+ 'axis': [0, 1, -1],\n+ }) + testing.product({\n+ 'filter': ['minimum_filter', 'maximum_filter'],\n+ 'footprint': [False, True],\n+ }),\n+\n+ # Mode-specific params\n+ testing.product({\n+ **COMMON_PARAMS,\n+ 'mode': ['reflect'],\n+ # With reflect test some of the other parameters as well\n+ 'origin': [0, 1, None],\n+ 'output': [None, numpy.int32, numpy.float64],\n+ 'dtype': [numpy.uint8, numpy.int16, numpy.int32,\n+ numpy.float32, numpy.float64],\n+ }) 
+ testing.product({\n+ **COMMON_PARAMS,\n+ 'mode': ['constant'], 'cval': [-1.0, 0.0, 1.0],\n+ }) + testing.product({\n+ **COMMON_PARAMS,\n+ 'mode': ['nearest', 'wrap'],\n+ }) + testing.product({\n+ **COMMON_PARAMS,\n+ 'shape': [(4, 5), (3, 4, 5)], # no (1,3,4,5) here due to scipy bug\n+ 'mode': ['mirror'],\n+ })\n+ ])\n+))\n @testing.gpu\n @testing.with_requires('scipy')\n-class TestConvolveAndCorrelateSpecialCases(unittest.TestCase):\n+class TestFilter(FilterTestCaseBase):\n+ @testing.numpy_cupy_allclose(atol=1e-5, rtol=1e-5, scipy_name='scp')\n+ def test_filter(self, xp, scp):\n+ if self.dtype == getattr(self, 'output', None):\n+ raise unittest.SkipTest(\"redundant\")\n+ return self._filter(xp, scp)\n \n- def _filter(self, scp, a, w, mode='reflect', origin=0):\n- filter = getattr(scp.ndimage, self.filter)\n- return filter(a, w, mode=mode, origin=origin)\n \n+# Tests things requiring scipy >= 1.5.0\[email protected](*(\n+ testing.product([\n+ # Filter-function specific params\n+ testing.product({\n+ 'filter': ['convolve', 'correlate'],\n+ }) + testing.product({\n+ 'filter': ['convolve1d', 'correlate1d',\n+ 'minimum_filter1d', 'maximum_filter1d'],\n+ 'axis': [0, 1, -1],\n+ }) + testing.product({\n+ 'filter': ['minimum_filter', 'maximum_filter'],\n+ 'footprint': [False, True],\n+ }),\n+\n+ # Mode-specific params\n+ testing.product({\n+ **COMMON_PARAMS,\n+ 'shape': [(1, 3, 4, 5)],\n+ 'mode': ['mirror'],\n+ })\n+ ])\n+))\[email protected]\n+# SciPy behavior fixed in 1.5.0: https://github.com/scipy/scipy/issues/11661\[email protected]_requires('scipy>=1.5.0')\n+class TestMirrorWithDim1(FilterTestCaseBase):\n @testing.numpy_cupy_allclose(atol=1e-5, rtol=1e-5, scipy_name='scp')\n- def test_weights_with_size_zero_dim(self, xp, scp):\n- a = testing.shaped_random((3, ) * self.ndim, xp, self.dtype)\n- w = testing.shaped_random((0, ) + (3, ) * self.ndim, xp, self.dtype)\n- return self._filter(scp, a, w)\n-\n- def test_invalid_shape_weights(self):\n- a = testing.shaped_random((3, ) * self.ndim, cupy, self.dtype)\n- w = testing.shaped_random((3, ) * (self.ndim - 1), cupy, self.dtype)\n- with self.assertRaises(RuntimeError):\n- self._filter(cupyx.scipy, a, w)\n- w = testing.shaped_random((0, ) + (3, ) * (self.ndim - 1), cupy,\n- self.dtype)\n- with self.assertRaises(RuntimeError):\n- self._filter(cupyx.scipy, a, w)\n-\n- def test_invalid_mode(self):\n- a = testing.shaped_random((3, ) * self.ndim, cupy, self.dtype)\n- w = testing.shaped_random((3, ) * self.ndim, cupy, self.dtype)\n- with self.assertRaises(RuntimeError):\n- self._filter(cupyx.scipy, a, w, mode='unknown')\n-\n- # SciPy behavior fixed in 1.2.0: https://github.com/scipy/scipy/issues/822\n- @testing.with_requires('scipy>=1.2.0')\n- def test_invalid_origin(self):\n- a = testing.shaped_random((3, ) * self.ndim, cupy, self.dtype)\n- for lenw in [3, 4]:\n- w = testing.shaped_random((lenw, ) * self.ndim, cupy, self.dtype)\n- for origin in range(-3, 4):\n- if (lenw // 2 + origin < 0) or (lenw // 2 + origin >= lenw):\n- with self.assertRaises(ValueError):\n- self._filter(cupyx.scipy, a, w, origin=origin)\n- else:\n- self._filter(cupyx.scipy, a, w, origin=origin)\n-\n-\n-# ######### Testing convolve1d and correlate1d ##########\n+ def test_filter(self, xp, scp):\n+ return self._filter(xp, scp)\n \n \n+# Tests with weight dtypes that are distinct from the input and output dtypes\n @testing.parameterize(*(\n- testing.product({\n- 'shape': [(3, 4), (2, 3, 4), (1, 2, 3, 4)],\n- 'ksize': [3, 4],\n- 'axis': [0, 1, -1],\n- 'mode': ['reflect'],\n- 'cval': 
[0.0],\n- 'origin': [0, 1, -1],\n- 'adtype': [numpy.int8, numpy.int16, numpy.int32,\n- numpy.float32, numpy.float64],\n- 'wdtype': [None, numpy.int32, numpy.float64],\n- 'output': [None, numpy.int32, numpy.float64],\n- 'filter': ['convolve1d', 'correlate1d']\n- }) + testing.product({\n- 'shape': [(3, 4), (2, 3, 4), (1, 2, 3, 4)],\n- 'ksize': [3, 4],\n- 'axis': [0, 1, -1],\n- 'mode': ['constant'],\n- 'cval': [-1.0, 0.0, 1.0],\n- 'origin': [0],\n- 'adtype': [numpy.int32, numpy.float64],\n- 'wdtype': [None],\n- 'output': [None],\n- 'filter': ['convolve1d', 'correlate1d']\n- }) + testing.product({\n- 'shape': [(3, 4), (2, 3, 4), (1, 2, 3, 4)],\n- 'ksize': [3, 4],\n- 'axis': [0, 1, -1],\n- 'mode': ['nearest', 'mirror', 'wrap'],\n- 'cval': [0.0],\n- 'origin': [0],\n- 'adtype': [numpy.int32, numpy.float64],\n- 'wdtype': [None],\n- 'output': [None],\n- 'filter': ['convolve1d', 'correlate1d']\n- })\n+ testing.product([\n+ testing.product({\n+ 'filter': ['convolve', 'correlate'],\n+ }) + testing.product({\n+ 'filter': ['convolve1d', 'correlate1d'],\n+ 'axis': [0, 1, -1],\n+ }),\n+ testing.product({\n+ **COMMON_PARAMS,\n+ 'mode': ['reflect'],\n+ 'output': [None, numpy.int32, numpy.float64],\n+ 'dtype': [numpy.uint8, numpy.int16, numpy.int32,\n+ numpy.float32, numpy.float64],\n+ 'wdtype': [numpy.int32, numpy.float64],\n+ })\n+ ])\n ))\n @testing.gpu\n @testing.with_requires('scipy')\n-class TestConvolve1DAndCorrelate1D(unittest.TestCase):\n-\n- def _filter(self, xp, scp, a, w):\n- filter = getattr(scp.ndimage, self.filter)\n- return filter(a, w, axis=self.axis, output=self.output, mode=self.mode,\n- cval=self.cval, origin=self.origin)\n-\n+class TestWeightDtype(FilterTestCaseBase):\n @testing.numpy_cupy_allclose(atol=1e-5, rtol=1e-5, scipy_name='scp')\n- def test_convolve1d_and_correlate1d(self, xp, scp):\n- if 1 in self.shape and self.mode == 'mirror':\n- raise unittest.SkipTest(\"requires scipy>1.5.0, tested later\")\n- if self.adtype == self.wdtype or self.adtype == self.output:\n+ def test_filter(self, xp, scp):\n+ if self.dtype == self.wdtype:\n raise unittest.SkipTest(\"redundant\")\n- a = testing.shaped_random(self.shape, xp, self.adtype)\n- if self.wdtype is None:\n- wdtype = self.adtype\n- else:\n- wdtype = self.wdtype\n- w = testing.shaped_random((self.ksize,), xp, wdtype)\n- return self._filter(xp, scp, a, w)\n+ return self._filter(xp, scp)\n \n \n+# Tests special weights (ND)\n @testing.parameterize(*testing.product({\n- 'shape': [(1, 2, 3, 4)],\n- 'ksize': [3, 4],\n- 'axis': [0, 1, -1],\n+ 'filter': ['convolve', 'correlate', 'minimum_filter', 'maximum_filter'],\n+ 'shape': [(3, 3), (3, 3, 3)],\n 'dtype': [numpy.int32, numpy.float64],\n- 'filter': ['convolve1d', 'correlate1d']\n }))\n @testing.gpu\n-# SciPy behavior fixed in 1.5.0: https://github.com/scipy/scipy/issues/11661\[email protected]_requires('scipy>=1.5.0')\n-class TestConvolveAndCorrelateMirrorDim1(unittest.TestCase):\[email protected]_requires('scipy')\n+class TestSpecialWeightCases(FilterTestCaseBase):\n @testing.numpy_cupy_allclose(atol=1e-5, rtol=1e-5, scipy_name='scp')\n- def test_convolve_and_correlate(self, xp, scp):\n- a = testing.shaped_random(self.shape, xp, self.dtype)\n- w = testing.shaped_random((self.ksize,) * a.ndim, xp, self.dtype)\n- filter = getattr(scp.ndimage, self.filter)\n- return filter(a, w, axis=self.axis, output=None, mode='mirror',\n- cval=0.0, origin=0)\n+ #@testing.numpy_cupy_raises(scipy_name='scp', accept_error=ValueError)\n+ def test_extra_0_dim(self, xp, scp):\n+ # NOTE: minimum/maximum_filter 
raise ValueError but convolve/correlate\n+ # return an array of zeroes the same shape as the input. This will\n+ # handle both and only pass is both numpy and cupy do the same thing.\n+ self.kshape = (0,) + self.shape\n+ try:\n+ return self._filter(xp, scp)\n+ except ValueError:\n+ return xp.zeros((0,)) #xp.zeros(self.shape)\n+\n \n+ @testing.numpy_cupy_raises(scipy_name='scp', accept_error=RuntimeError)\n+ def test_missing_dim(self, xp, scp):\n+ self.kshape = self.shape[1:]\n+ return self._filter(xp, scp)\n \n+\n+ @testing.numpy_cupy_raises(scipy_name='scp', accept_error=RuntimeError)\n+ def test_extra_dim(self, xp, scp):\n+ self.kshape = self.shape[:1] + self.shape\n+ return self._filter(xp, scp)\n+\n+\n+ @testing.numpy_cupy_raises(scipy_name='scp', accept_error=(RuntimeError,\n+ ValueError))\n+ def test_replace_dim_with_0(self, xp, scp):\n+ self.kshape = (0,) + self.shape[1:]\n+ return self._filter(xp, scp)\n+\n+\n+# Tests special weights (1D)\n @testing.parameterize(*testing.product({\n- 'ndim': [2, 3],\n+ 'filter': ['convolve1d', 'correlate1d',\n+ 'minimum_filter1d', 'maximum_filter1d'],\n+ 'shape': [(3, 3), (3, 3, 3)],\n 'dtype': [numpy.int32, numpy.float64],\n- 'filter': ['convolve1d', 'correlate1d']\n }))\n @testing.gpu\n @testing.with_requires('scipy')\n-class TestConvolve1DAndCorrelate1DSpecialCases(unittest.TestCase):\n+class TestSpecialCases1D(FilterTestCaseBase):\n+ @testing.numpy_cupy_raises(scipy_name='scp', accept_error=RuntimeError)\n+ def test_0_dim(self, xp, scp):\n+ self.ksize = 0\n+ return self._filter(xp, scp)\n \n- def _filter(self, scp, a, w, mode='reflect', origin=0):\n- filter = getattr(scp.ndimage, self.filter)\n- return filter(a, w, mode=mode, origin=origin)\n-\n- def test_weights_with_size_zero_dim(self):\n- a = testing.shaped_random((3, ) * self.ndim, cupy, self.dtype)\n- w = testing.shaped_random((0, 3), cupy, self.dtype)\n- with self.assertRaises(RuntimeError):\n- self._filter(cupyx.scipy, a, w)\n-\n- def test_invalid_shape_weights(self):\n- a = testing.shaped_random((3, ) * self.ndim, cupy, self.dtype)\n- w = testing.shaped_random((3, 3), cupy, self.dtype)\n- with self.assertRaises(RuntimeError):\n- self._filter(cupyx.scipy, a, w)\n- w = testing.shaped_random((0, ), cupy,\n- self.dtype)\n- with self.assertRaises(RuntimeError):\n- self._filter(cupyx.scipy, a, w)\n-\n- def test_invalid_mode(self):\n- a = testing.shaped_random((3, ) * self.ndim, cupy, self.dtype)\n- w = testing.shaped_random((3,), cupy, self.dtype)\n- with self.assertRaises(RuntimeError):\n- self._filter(cupyx.scipy, a, w, mode='unknown')\n-\n- # SciPy behavior fixed in 1.2.0: https://github.com/scipy/scipy/issues/822\n- @testing.with_requires('scipy>=1.2.0')\n- def test_invalid_origin(self):\n- a = testing.shaped_random((3, ) * self.ndim, cupy, self.dtype)\n- for lenw in [3, 4]:\n- w = testing.shaped_random((lenw, ), cupy, self.dtype)\n- for origin in range(-3, 4):\n- if (lenw // 2 + origin < 0) or (lenw // 2 + origin >= lenw):\n- with self.assertRaises(ValueError):\n- self._filter(cupyx.scipy, a, w, origin=origin)\n- else:\n- self._filter(cupyx.scipy, a, w, origin=origin)\n-\n-\n-# ######### Testing minimum_filter and maximum_filter ##########\n \n+# Tests invalid axis value\n @testing.parameterize(*testing.product({\n- 'size': [3, 4],\n- 'footprint': [None, 'random'],\n- 'mode': ['reflect', 'constant', 'nearest', 'mirror', 'wrap'],\n- 'origin': [0, None],\n- 'x_dtype': [numpy.int32, numpy.float32],\n- 'output': [None, numpy.float64],\n- 'filter': ['minimum_filter', 'maximum_filter']\n+ 
'filter': ['convolve1d', 'correlate1d',\n+ 'minimum_filter1d', 'maximum_filter1d'],\n+ 'shape': [(4, 5), (3, 4, 5), (1, 3, 4, 5)],\n }))\n @testing.gpu\n @testing.with_requires('scipy')\n-class TestMinimumMaximumFilter(unittest.TestCase):\n-\n- shape = (4, 5)\n- cval = 0.0\n-\n- def _filter(self, xp, scp, x):\n- filter = getattr(scp.ndimage, self.filter)\n- if self.origin is None:\n- origin = (-1, 1, -1, 1)[:x.ndim]\n- else:\n- origin = self.origin\n- if self.footprint is None:\n- size, footprint = self.size, None\n- else:\n- size = None\n- shape = (self.size, ) * x.ndim\n- footprint = testing.shaped_random(shape, xp, scale=1) > .5\n- if not footprint.any():\n- footprint = xp.ones(shape)\n- return filter(x, size=size, footprint=footprint,\n- output=self.output, mode=self.mode, cval=self.cval,\n- origin=origin)\n-\n- @testing.numpy_cupy_allclose(atol=1e-5, rtol=1e-5, scipy_name='scp')\n- def test_minimum_and_maximum_filter(self, xp, scp):\n- x = testing.shaped_random(self.shape, xp, self.x_dtype)\n- return self._filter(xp, scp, x)\n-\n-\n-# ######### Testing minimum_filter1d and maximum_filter1d ##########\n-\n-\[email protected](*(\n- testing.product({\n- 'shape': [(3, 4), (2, 3, 4), (1, 2, 3, 4)],\n- 'ksize': [3, 4],\n- 'axis': [0, 1, -1],\n- 'mode': ['reflect'],\n- 'cval': [0.0],\n- 'origin': [0, 1, -1],\n- 'wdtype': [numpy.int32, numpy.float64],\n- 'output': [None, numpy.int32, numpy.float64],\n- 'filter': ['minimum_filter1d', 'maximum_filter1d']\n- }) + testing.product({\n- 'shape': [(3, 4), (2, 3, 4), (1, 2, 3, 4)],\n- 'ksize': [3, 4],\n- 'axis': [0, 1, -1],\n- 'mode': ['constant'],\n- 'cval': [-1.0, 0.0, 1.0],\n- 'origin': [0],\n- 'wdtype': [numpy.int32, numpy.float64],\n- 'output': [None],\n- 'filter': ['minimum_filter1d', 'maximum_filter1d']\n- }) + testing.product({\n- 'shape': [(3, 4), (2, 3, 4), (1, 2, 3, 4)],\n- 'ksize': [3, 4],\n- 'axis': [0, 1, -1],\n- 'mode': ['nearest', 'mirror', 'wrap'],\n- 'cval': [0.0],\n- 'origin': [0],\n- 'wdtype': [numpy.int32, numpy.float64],\n- 'output': [None],\n- 'filter': ['minimum_filter1d', 'maximum_filter1d']\n- })\n-))\n+class TestInvalidAxis(FilterTestCaseBase):\n+ @testing.numpy_cupy_raises(scipy_name='scp', accept_error=ValueError)\n+ def test_invalid_axis_pos(self, xp, scp):\n+ self.axis = len(self.shape)\n+ try:\n+ return self._filter(xp, scp)\n+ except numpy.AxisError:\n+ # numpy.AxisError is a subclass of ValueError\n+ # currently cupyx is raising numpy.AxisError but scipy is still\n+ # raising ValueError\n+ raise ValueError('invalid axis')\n+\n+ @testing.numpy_cupy_raises(scipy_name='scp', accept_error=ValueError)\n+ def test_invalid_axis_neg(self, xp, scp):\n+ self.axis = -len(self.shape) - 1\n+ try:\n+ return self._filter(xp, scp)\n+ except numpy.AxisError:\n+ raise ValueError('invalid axis')\n+\n+\n+# Tests invalid mode value\[email protected](*testing.product({\n+ 'filter': ['convolve', 'correlate',\n+ 'convolve1d', 'correlate1d',\n+ 'minimum_filter', 'maximum_filter',\n+ 'minimum_filter1d', 'maximum_filter1d'],\n+ 'mode': ['unknown'],\n+ 'shape': [(4, 5)],\n+}))\n @testing.gpu\n @testing.with_requires('scipy')\n-class TestMinimumMaximum1DFilter(unittest.TestCase):\n- def _filter(self, xp, scp, a, w):\n- filter = getattr(scp.ndimage, self.filter)\n- return filter(a, w, axis=self.axis, output=self.output, mode=self.mode,\n- cval=self.cval, origin=self.origin)\n+class TestInvalidMode(FilterTestCaseBase):\n+ @testing.numpy_cupy_raises(scipy_name='scp', accept_error=RuntimeError)\n+ def test_invalid_mode(self, xp, scp):\n+ return 
self._filter(xp, scp)\n \n- @testing.numpy_cupy_allclose(atol=1e-5, rtol=1e-5, scipy_name='scp')\n- def test_convolve1d_and_correlate1d(self, xp, scp):\n- a = testing.shaped_random(self.shape, xp, self.x_dtype)\n- w = testing.shaped_random((self.ksize,), xp, self.x_dtype)\n- return self._filter(xp, scp, a, w)\n+\n+# Tests invalid origin values\[email protected](*testing.product({\n+ 'filter': ['convolve', 'correlate',\n+ 'convolve1d', 'correlate1d',\n+ 'minimum_filter', 'maximum_filter',\n+ 'minimum_filter1d', 'maximum_filter1d'],\n+ 'ksize': [3, 4],\n+ 'shape': [(4, 5)], 'dtype': [numpy.float64],\n+}))\[email protected]\n+# SciPy behavior fixed in 1.2.0: https://github.com/scipy/scipy/issues/822\[email protected]_requires('scipy>=1.2.0')\n+class TestInvalidOrigin(FilterTestCaseBase):\n+ @testing.numpy_cupy_raises(scipy_name='scp', accept_error=ValueError)\n+ def test_invalid_origin_neg(self, xp, scp):\n+ self.origin = -self.ksize // 2 - 1\n+ return self._filter(xp, scp)\n+\n+ @testing.numpy_cupy_raises(scipy_name='scp', accept_error=ValueError)\n+ def test_invalid_origin_pos(self, xp, scp):\n+ self.origin = self.ksize - self.ksize // 2\n+ return self._filter(xp, scp)\n", "new_path": "tests/cupyx_tests/scipy_tests/ndimage_tests/test_filters.py", "old_path": "tests/cupyx_tests/scipy_tests/ndimage_tests/test_filters.py" } ]
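The commit above replaces many per-filter test classes with one parameterized base class that looks the filter function up by name, so adding a new filter mostly means adding its name to a parameter list. A stripped-down, hypothetical illustration of that pattern, using plain NumPy and unittest rather than the CuPy test suite, might look like this:

    # Hypothetical sketch of the "dispatch by filter name" test pattern.
    import unittest

    import numpy as np


    class FilterCaseBase(unittest.TestCase):
        # Default "parameters"; subclasses (or a parameterize decorator)
        # override these instead of duplicating the test body.
        func_name = 'convolve'
        shape = (16,)

        def _run_filter(self):
            func = getattr(np, self.func_name)    # resolve the filter by name
            a = np.arange(self.shape[0], dtype=float)
            w = np.ones(3) / 3.0
            return func(a, w, mode='same')


    class TestConvolve(FilterCaseBase):
        func_name = 'convolve'

        def test_shape(self):
            self.assertEqual(self._run_filter().shape, self.shape)


    class TestCorrelate(FilterCaseBase):
        func_name = 'correlate'

        def test_shape(self):
            self.assertEqual(self._run_filter().shape, self.shape)


    if __name__ == '__main__':
        unittest.main()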
hash: dad51485282b6e05c4993b0733bd54aa3c0bacef
repo: cupy/cupy
date: 12.01.2021 16:21:46
license: MIT License
message: Use "import numpy as np" in the array_api submodule This avoids importing everything inside the individual functions, but still is preferred over importing the functions used explicitly, as most of them clash with the wrapper function names.
mods:
[ { "change_type": "MODIFY", "diff": "@@ -1,76 +1,67 @@\n+import numpy as np\n+\n def arange(start, /, *, stop=None, step=1, dtype=None, device=None):\n- from .. import arange\n if device is not None:\n # Note: Device support is not yet implemented on ndarray\n raise NotImplementedError(\"Device support is not yet implemented\")\n- return arange(start, stop=stop, step=step, dtype=dtype)\n+ return np.arange(start, stop=stop, step=step, dtype=dtype)\n \n def empty(shape, /, *, dtype=None, device=None):\n- from .. import empty\n if device is not None:\n # Note: Device support is not yet implemented on ndarray\n raise NotImplementedError(\"Device support is not yet implemented\")\n- return empty(shape, dtype=dtype)\n+ return np.empty(shape, dtype=dtype)\n \n def empty_like(x, /, *, dtype=None, device=None):\n- from .. import empty_like\n if device is not None:\n # Note: Device support is not yet implemented on ndarray\n raise NotImplementedError(\"Device support is not yet implemented\")\n- return empty_like(x, dtype=dtype)\n+ return np.empty_like(x, dtype=dtype)\n \n def eye(N, /, *, M=None, k=0, dtype=None, device=None):\n- from .. import eye\n if device is not None:\n # Note: Device support is not yet implemented on ndarray\n raise NotImplementedError(\"Device support is not yet implemented\")\n- return eye(N, M=M, k=k, dtype=dtype)\n+ return np.eye(N, M=M, k=k, dtype=dtype)\n \n def full(shape, fill_value, /, *, dtype=None, device=None):\n- from .. import full\n if device is not None:\n # Note: Device support is not yet implemented on ndarray\n raise NotImplementedError(\"Device support is not yet implemented\")\n- return full(shape, fill_value, dtype=dtype)\n+ return np.full(shape, fill_value, dtype=dtype)\n \n def full_like(x, fill_value, /, *, dtype=None, device=None):\n- from .. import full_like\n if device is not None:\n # Note: Device support is not yet implemented on ndarray\n raise NotImplementedError(\"Device support is not yet implemented\")\n- return full_like(x, fill_value, dtype=dtype)\n+ return np.full_like(x, fill_value, dtype=dtype)\n \n def linspace(start, stop, num, /, *, dtype=None, device=None, endpoint=True):\n- from .. import linspace\n if device is not None:\n # Note: Device support is not yet implemented on ndarray\n raise NotImplementedError(\"Device support is not yet implemented\")\n- return linspace(start, stop, num, dtype=dtype, endpoint=endpoint)\n+ return np.linspace(start, stop, num, dtype=dtype, endpoint=endpoint)\n \n def ones(shape, /, *, dtype=None, device=None):\n- from .. import ones\n if device is not None:\n # Note: Device support is not yet implemented on ndarray\n raise NotImplementedError(\"Device support is not yet implemented\")\n- return ones(shape, dtype=dtype)\n+ return np.ones(shape, dtype=dtype)\n \n def ones_like(x, /, *, dtype=None, device=None):\n- from .. import ones_like\n if device is not None:\n # Note: Device support is not yet implemented on ndarray\n raise NotImplementedError(\"Device support is not yet implemented\")\n- return ones_like(x, dtype=dtype)\n+ return np.ones_like(x, dtype=dtype)\n \n def zeros(shape, /, *, dtype=None, device=None):\n- from .. import zeros\n if device is not None:\n # Note: Device support is not yet implemented on ndarray\n raise NotImplementedError(\"Device support is not yet implemented\")\n- return zeros(shape, dtype=dtype)\n+ return np.zeros(shape, dtype=dtype)\n \n def zeros_like(x, /, *, dtype=None, device=None):\n- from .. 
import zeros_like\n if device is not None:\n # Note: Device support is not yet implemented on ndarray\n raise NotImplementedError(\"Device support is not yet implemented\")\n- return zeros_like(x, dtype=dtype)\n+ return np.zeros_like(x, dtype=dtype)\n", "new_path": "numpy/_array_api/_creation_functions.py", "old_path": "numpy/_array_api/_creation_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -1,230 +1,177 @@\n+import numpy as np\n+\n def abs(x, /):\n- from .. import abs\n- return abs(x)\n+ return np.abs(x)\n \n def acos(x, /):\n # Note: the function name is different here\n- from .. import arccos\n- return arccos(x)\n+ return np.arccos(x)\n \n def acosh(x, /):\n # Note: the function name is different here\n- from .. import arccosh\n- return arccosh(x)\n+ return np.arccosh(x)\n \n def add(x1, x2, /):\n- from .. import add\n- return add(x1, x2)\n+ return np.add(x1, x2)\n \n def asin(x, /):\n # Note: the function name is different here\n- from .. import arcsin\n- return arcsin(x)\n+ return np.arcsin(x)\n \n def asinh(x, /):\n # Note: the function name is different here\n- from .. import arcsinh\n- return arcsinh(x)\n+ return np.arcsinh(x)\n \n def atan(x, /):\n # Note: the function name is different here\n- from .. import arctan\n- return arctan(x)\n+ return np.arctan(x)\n \n def atan2(x1, x2, /):\n # Note: the function name is different here\n- from .. import arctan2\n- return arctan2(x1, x2)\n+ return np.arctan2(x1, x2)\n \n def atanh(x, /):\n # Note: the function name is different here\n- from .. import arctanh\n- return arctanh(x)\n+ return np.arctanh(x)\n \n def bitwise_and(x1, x2, /):\n- from .. import bitwise_and\n- return bitwise_and(x1, x2)\n+ return np.bitwise_and(x1, x2)\n \n def bitwise_left_shift(x1, x2, /):\n # Note: the function name is different here\n- from .. import left_shift\n- return left_shift(x1, x2)\n+ return np.left_shift(x1, x2)\n \n def bitwise_invert(x, /):\n # Note: the function name is different here\n- from .. import invert\n- return invert(x)\n+ return np.invert(x)\n \n def bitwise_or(x1, x2, /):\n- from .. import bitwise_or\n- return bitwise_or(x1, x2)\n+ return np.bitwise_or(x1, x2)\n \n def bitwise_right_shift(x1, x2, /):\n # Note: the function name is different here\n- from .. import right_shift\n- return right_shift(x1, x2)\n+ return np.right_shift(x1, x2)\n \n def bitwise_xor(x1, x2, /):\n- from .. import bitwise_xor\n- return bitwise_xor(x1, x2)\n+ return np.bitwise_xor(x1, x2)\n \n def ceil(x, /):\n- from .. import ceil\n- return ceil(x)\n+ return np.ceil(x)\n \n def cos(x, /):\n- from .. import cos\n- return cos(x)\n+ return np.cos(x)\n \n def cosh(x, /):\n- from .. import cosh\n- return cosh(x)\n+ return np.cosh(x)\n \n def divide(x1, x2, /):\n- from .. import divide\n- return divide(x1, x2)\n+ return np.divide(x1, x2)\n \n def equal(x1, x2, /):\n- from .. import equal\n- return equal(x1, x2)\n+ return np.equal(x1, x2)\n \n def exp(x, /):\n- from .. import exp\n- return exp(x)\n+ return np.exp(x)\n \n def expm1(x, /):\n- from .. import expm1\n- return expm1(x)\n+ return np.expm1(x)\n \n def floor(x, /):\n- from .. import floor\n- return floor(x)\n+ return np.floor(x)\n \n def floor_divide(x1, x2, /):\n- from .. import floor_divide\n- return floor_divide(x1, x2)\n+ return np.floor_divide(x1, x2)\n \n def greater(x1, x2, /):\n- from .. import greater\n- return greater(x1, x2)\n+ return np.greater(x1, x2)\n \n def greater_equal(x1, x2, /):\n- from .. 
import greater_equal\n- return greater_equal(x1, x2)\n+ return np.greater_equal(x1, x2)\n \n def isfinite(x, /):\n- from .. import isfinite\n- return isfinite(x)\n+ return np.isfinite(x)\n \n def isinf(x, /):\n- from .. import isinf\n- return isinf(x)\n+ return np.isinf(x)\n \n def isnan(x, /):\n- from .. import isnan\n- return isnan(x)\n+ return np.isnan(x)\n \n def less(x1, x2, /):\n- from .. import less\n- return less(x1, x2)\n+ return np.less(x1, x2)\n \n def less_equal(x1, x2, /):\n- from .. import less_equal\n- return less_equal(x1, x2)\n+ return np.less_equal(x1, x2)\n \n def log(x, /):\n- from .. import log\n- return log(x)\n+ return np.log(x)\n \n def log1p(x, /):\n- from .. import log1p\n- return log1p(x)\n+ return np.log1p(x)\n \n def log2(x, /):\n- from .. import log2\n- return log2(x)\n+ return np.log2(x)\n \n def log10(x, /):\n- from .. import log10\n- return log10(x)\n+ return np.log10(x)\n \n def logical_and(x1, x2, /):\n- from .. import logical_and\n- return logical_and(x1, x2)\n+ return np.logical_and(x1, x2)\n \n def logical_not(x, /):\n- from .. import logical_not\n- return logical_not(x)\n+ return np.logical_not(x)\n \n def logical_or(x1, x2, /):\n- from .. import logical_or\n- return logical_or(x1, x2)\n+ return np.logical_or(x1, x2)\n \n def logical_xor(x1, x2, /):\n- from .. import logical_xor\n- return logical_xor(x1, x2)\n+ return np.logical_xor(x1, x2)\n \n def multiply(x1, x2, /):\n- from .. import multiply\n- return multiply(x1, x2)\n+ return np.multiply(x1, x2)\n \n def negative(x, /):\n- from .. import negative\n- return negative(x)\n+ return np.negative(x)\n \n def not_equal(x1, x2, /):\n- from .. import not_equal\n- return not_equal(x1, x2)\n+ return np.not_equal(x1, x2)\n \n def positive(x, /):\n- from .. import positive\n- return positive(x)\n+ return np.positive(x)\n \n def pow(x1, x2, /):\n # Note: the function name is different here\n- from .. import power\n- return power(x1, x2)\n+ return np.power(x1, x2)\n \n def remainder(x1, x2, /):\n- from .. import remainder\n- return remainder(x1, x2)\n+ return np.remainder(x1, x2)\n \n def round(x, /):\n- from .. import round\n- return round(x)\n+ return np.round(x)\n \n def sign(x, /):\n- from .. import sign\n- return sign(x)\n+ return np.sign(x)\n \n def sin(x, /):\n- from .. import sin\n- return sin(x)\n+ return np.sin(x)\n \n def sinh(x, /):\n- from .. import sinh\n- return sinh(x)\n+ return np.sinh(x)\n \n def square(x, /):\n- from .. import square\n- return square(x)\n+ return np.square(x)\n \n def sqrt(x, /):\n- from .. import sqrt\n- return sqrt(x)\n+ return np.sqrt(x)\n \n def subtract(x1, x2, /):\n- from .. import subtract\n- return subtract(x1, x2)\n+ return np.subtract(x1, x2)\n \n def tan(x, /):\n- from .. import tan\n- return tan(x)\n+ return np.tan(x)\n \n def tanh(x, /):\n- from .. import tanh\n- return tanh(x)\n+ return np.tanh(x)\n \n def trunc(x, /):\n- from .. import trunc\n- return trunc(x)\n+ return np.trunc(x)\n", "new_path": "numpy/_array_api/_elementwise_functions.py", "old_path": "numpy/_array_api/_elementwise_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -1,93 +1,73 @@\n+import numpy as np\n+\n # def cholesky():\n-# from .. import cholesky\n-# return cholesky()\n+# return np.cholesky()\n \n def cross(x1, x2, /, *, axis=-1):\n- from .. 
import cross\n- return cross(x1, x2, axis=axis)\n+ return np.cross(x1, x2, axis=axis)\n \n def det(x, /):\n # Note: this function is being imported from a nondefault namespace\n- from ..linalg import det\n- return det(x)\n+ return np.det(x)\n \n def diagonal(x, /, *, axis1=0, axis2=1, offset=0):\n- from .. import diagonal\n- return diagonal(x, axis1=axis1, axis2=axis2, offset=offset)\n+ return np.diagonal(x, axis1=axis1, axis2=axis2, offset=offset)\n \n # def dot():\n-# from .. import dot\n-# return dot()\n+# return np.dot()\n #\n # def eig():\n-# from .. import eig\n-# return eig()\n+# return np.eig()\n #\n # def eigvalsh():\n-# from .. import eigvalsh\n-# return eigvalsh()\n+# return np.eigvalsh()\n #\n # def einsum():\n-# from .. import einsum\n-# return einsum()\n+# return np.einsum()\n \n def inv(x):\n # Note: this function is being imported from a nondefault namespace\n- from ..linalg import inv\n- return inv(x)\n+ return np.inv(x)\n \n # def lstsq():\n-# from .. import lstsq\n-# return lstsq()\n+# return np.lstsq()\n #\n # def matmul():\n-# from .. import matmul\n-# return matmul()\n+# return np.matmul()\n #\n # def matrix_power():\n-# from .. import matrix_power\n-# return matrix_power()\n+# return np.matrix_power()\n #\n # def matrix_rank():\n-# from .. import matrix_rank\n-# return matrix_rank()\n+# return np.matrix_rank()\n \n def norm(x, /, *, axis=None, keepdims=False, ord=None):\n # Note: this function is being imported from a nondefault namespace\n- from ..linalg import norm\n # Note: this is different from the default behavior\n if axis == None and x.ndim > 2:\n x = x.flatten()\n- return norm(x, axis=axis, keepdims=keepdims, ord=ord)\n+ return np.norm(x, axis=axis, keepdims=keepdims, ord=ord)\n \n def outer(x1, x2, /):\n- from .. import outer\n- return outer(x1, x2)\n+ return np.outer(x1, x2)\n \n # def pinv():\n-# from .. import pinv\n-# return pinv()\n+# return np.pinv()\n #\n # def qr():\n-# from .. import qr\n-# return qr()\n+# return np.qr()\n #\n # def slogdet():\n-# from .. import slogdet\n-# return slogdet()\n+# return np.slogdet()\n #\n # def solve():\n-# from .. import solve\n-# return solve()\n+# return np.solve()\n #\n # def svd():\n-# from .. import svd\n-# return svd()\n+# return np.svd()\n \n def trace(x, /, *, axis1=0, axis2=1, offset=0):\n- from .. import trace\n- return trace(x, axis1=axis1, axis2=axis2, offset=offset)\n+ return np.trace(x, axis1=axis1, axis2=axis2, offset=offset)\n \n def transpose(x, /, *, axes=None):\n- from .. import transpose\n- return transpose(x, axes=axes)\n+ return np.transpose(x, axes=axes)\n", "new_path": "numpy/_array_api/_linear_algebra_functions.py", "old_path": "numpy/_array_api/_linear_algebra_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -1,28 +1,23 @@\n+import numpy as np\n+\n def concat(arrays, /, *, axis=0):\n # Note: the function name is different here\n- from .. import concatenate\n- return concatenate(arrays, axis=axis)\n+ return np.concatenate(arrays, axis=axis)\n \n def expand_dims(x, axis, /):\n- from .. import expand_dims\n- return expand_dims(x, axis)\n+ return np.expand_dims(x, axis)\n \n def flip(x, /, *, axis=None):\n- from .. import flip\n- return flip(x, axis=axis)\n+ return np.flip(x, axis=axis)\n \n def reshape(x, shape, /):\n- from .. import reshape\n- return reshape(x, shape)\n+ return np.reshape(x, shape)\n \n def roll(x, shift, /, *, axis=None):\n- from .. import roll\n- return roll(x, shift, axis=axis)\n+ return np.roll(x, shift, axis=axis)\n \n def squeeze(x, /, *, axis=None):\n- from .. 
import squeeze\n- return squeeze(x, axis=axis)\n+ return np.squeeze(x, axis=axis)\n \n def stack(arrays, /, *, axis=0):\n- from .. import stack\n- return stack(arrays, axis=axis)\n+ return np.stack(arrays, axis=axis)\n", "new_path": "numpy/_array_api/_manipulation_functions.py", "old_path": "numpy/_array_api/_manipulation_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -1,15 +1,13 @@\n+import numpy as np\n+\n def argmax(x, /, *, axis=None, keepdims=False):\n- from .. import argmax\n- return argmax(x, axis=axis, keepdims=keepdims)\n+ return np.argmax(x, axis=axis, keepdims=keepdims)\n \n def argmin(x, /, *, axis=None, keepdims=False):\n- from .. import argmin\n- return argmin(x, axis=axis, keepdims=keepdims)\n+ return np.argmin(x, axis=axis, keepdims=keepdims)\n \n def nonzero(x, /):\n- from .. import nonzero\n- return nonzero(x)\n+ return np.nonzero(x)\n \n def where(condition, x1, x2, /):\n- from .. import where\n- return where(condition, x1, x2)\n+ return np.where(condition, x1, x2)\n", "new_path": "numpy/_array_api/_searching_functions.py", "old_path": "numpy/_array_api/_searching_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -1,3 +1,4 @@\n+import numpy as np\n+\n def unique(x, /, *, return_counts=False, return_index=False, return_inverse=False, sorted=True):\n- from .. import unique\n- return unique(x, return_counts=return_counts, return_index=return_index, return_inverse=return_inverse, sorted=sorted)\n+ return np.unique(x, return_counts=return_counts, return_index=return_index, return_inverse=return_inverse, sorted=sorted)\n", "new_path": "numpy/_array_api/_set_functions.py", "old_path": "numpy/_array_api/_set_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -1,19 +1,17 @@\n+import numpy as np\n+\n def argsort(x, /, *, axis=-1, descending=False, stable=True):\n- from .. import argsort\n- from .. import flip\n # Note: this keyword argument is different, and the default is different.\n kind = 'stable' if stable else 'quicksort'\n- res = argsort(x, axis=axis, kind=kind)\n+ res = np.argsort(x, axis=axis, kind=kind)\n if descending:\n- res = flip(res, axis=axis)\n+ res = np.flip(res, axis=axis)\n return res\n \n def sort(x, /, *, axis=-1, descending=False, stable=True):\n- from .. import sort\n- from .. import flip\n # Note: this keyword argument is different, and the default is different.\n kind = 'stable' if stable else 'quicksort'\n- res = sort(x, axis=axis, kind=kind)\n+ res = np.sort(x, axis=axis, kind=kind)\n if descending:\n- res = flip(res, axis=axis)\n+ res = np.flip(res, axis=axis)\n return res\n", "new_path": "numpy/_array_api/_sorting_functions.py", "old_path": "numpy/_array_api/_sorting_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -1,29 +1,24 @@\n+import numpy as np\n+\n def max(x, /, *, axis=None, keepdims=False):\n- from .. import max\n- return max(x, axis=axis, keepdims=keepdims)\n+ return np.max(x, axis=axis, keepdims=keepdims)\n \n def mean(x, /, *, axis=None, keepdims=False):\n- from .. import mean\n- return mean(x, axis=axis, keepdims=keepdims)\n+ return np.mean(x, axis=axis, keepdims=keepdims)\n \n def min(x, /, *, axis=None, keepdims=False):\n- from .. import min\n- return min(x, axis=axis, keepdims=keepdims)\n+ return np.min(x, axis=axis, keepdims=keepdims)\n \n def prod(x, /, *, axis=None, keepdims=False):\n- from .. import prod\n- return prod(x, axis=axis, keepdims=keepdims)\n+ return np.prod(x, axis=axis, keepdims=keepdims)\n \n def std(x, /, *, axis=None, correction=0.0, keepdims=False):\n- from .. 
import std\n # Note: the keyword argument correction is different here\n- return std(x, axis=axis, ddof=correction, keepdims=keepdims)\n+ return np.std(x, axis=axis, ddof=correction, keepdims=keepdims)\n \n def sum(x, /, *, axis=None, keepdims=False):\n- from .. import sum\n- return sum(x, axis=axis, keepdims=keepdims)\n+ return np.sum(x, axis=axis, keepdims=keepdims)\n \n def var(x, /, *, axis=None, correction=0.0, keepdims=False):\n- from .. import var\n # Note: the keyword argument correction is different here\n- return var(x, axis=axis, ddof=correction, keepdims=keepdims)\n+ return np.var(x, axis=axis, ddof=correction, keepdims=keepdims)\n", "new_path": "numpy/_array_api/_statistical_functions.py", "old_path": "numpy/_array_api/_statistical_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -1,7 +1,7 @@\n+import numpy as np\n+\n def all(x, /, *, axis=None, keepdims=False):\n- from .. import all\n- return all(x, axis=axis, keepdims=keepdims)\n+ return np.all(x, axis=axis, keepdims=keepdims)\n \n def any(x, /, *, axis=None, keepdims=False):\n- from .. import any\n- return any(x, axis=axis, keepdims=keepdims)\n+ return np.any(x, axis=axis, keepdims=keepdims)\n", "new_path": "numpy/_array_api/_utility_functions.py", "old_path": "numpy/_array_api/_utility_functions.py" } ]
76eb888612183768d9e1b0c818fcf5416c5f28c7
cupy/cupy
20.01.2021 18:25:20
MIT License
Use _implementation on all functions that have it in the array API submodule. That way they only work on actual ndarray inputs, not array-like, which is more in line with the spec. (A short sketch of this pattern follows this record.)
[ { "change_type": "MODIFY", "diff": "@@ -35,7 +35,7 @@ def empty_like(x: array, /, *, dtype: Optional[dtype] = None, device: Optional[d\n if device is not None:\n # Note: Device support is not yet implemented on ndarray\n raise NotImplementedError(\"Device support is not yet implemented\")\n- return np.empty_like(x, dtype=dtype)\n+ return np.empty_like._implementation(x, dtype=dtype)\n \n def eye(N: int, /, *, M: Optional[int] = None, k: Optional[int] = 0, dtype: Optional[dtype] = None, device: Optional[device] = None) -> array:\n \"\"\"\n@@ -68,7 +68,7 @@ def full_like(x: array, fill_value: Union[int, float], /, *, dtype: Optional[dty\n if device is not None:\n # Note: Device support is not yet implemented on ndarray\n raise NotImplementedError(\"Device support is not yet implemented\")\n- return np.full_like(x, fill_value, dtype=dtype)\n+ return np.full_like._implementation(x, fill_value, dtype=dtype)\n \n def linspace(start: Union[int, float], stop: Union[int, float], num: int, /, *, dtype: Optional[dtype] = None, device: Optional[device] = None, endpoint: bool = True) -> array:\n \"\"\"\n@@ -101,7 +101,7 @@ def ones_like(x: array, /, *, dtype: Optional[dtype] = None, device: Optional[de\n if device is not None:\n # Note: Device support is not yet implemented on ndarray\n raise NotImplementedError(\"Device support is not yet implemented\")\n- return np.ones_like(x, dtype=dtype)\n+ return np.ones_like._implementation(x, dtype=dtype)\n \n def zeros(shape: Union[int, Tuple[int, ...]], /, *, dtype: Optional[dtype] = None, device: Optional[device] = None) -> array:\n \"\"\"\n@@ -123,4 +123,4 @@ def zeros_like(x: array, /, *, dtype: Optional[dtype] = None, device: Optional[d\n if device is not None:\n # Note: Device support is not yet implemented on ndarray\n raise NotImplementedError(\"Device support is not yet implemented\")\n- return np.zeros_like(x, dtype=dtype)\n+ return np.zeros_like._implementation(x, dtype=dtype)\n", "new_path": "numpy/_array_api/_creation_functions.py", "old_path": "numpy/_array_api/_creation_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -381,7 +381,7 @@ def round(x: array, /) -> array:\n \n See its docstring for more information.\n \"\"\"\n- return np.round(x)\n+ return np.round._implementation(x)\n \n def sign(x: array, /) -> array:\n \"\"\"\n", "new_path": "numpy/_array_api/_elementwise_functions.py", "old_path": "numpy/_array_api/_elementwise_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -18,7 +18,7 @@ def cross(x1: array, x2: array, /, *, axis: int = -1) -> array:\n \n See its docstring for more information.\n \"\"\"\n- return np.cross(x1, x2, axis=axis)\n+ return np.cross._implementation(x1, x2, axis=axis)\n \n def det(x: array, /) -> array:\n \"\"\"\n@@ -35,7 +35,7 @@ def diagonal(x: array, /, *, axis1: int = 0, axis2: int = 1, offset: int = 0) ->\n \n See its docstring for more information.\n \"\"\"\n- return np.diagonal(x, axis1=axis1, axis2=axis2, offset=offset)\n+ return np.diagonal._implementation(x, axis1=axis1, axis2=axis2, offset=offset)\n \n # def dot():\n # \"\"\"\n@@ -128,7 +128,7 @@ def outer(x1: array, x2: array, /) -> array:\n \n See its docstring for more information.\n \"\"\"\n- return np.outer(x1, x2)\n+ return np.outer._implementation(x1, x2)\n \n # def pinv():\n # \"\"\"\n@@ -176,7 +176,7 @@ def trace(x: array, /, *, axis1: int = 0, axis2: int = 1, offset: int = 0) -> ar\n \n See its docstring for more information.\n \"\"\"\n- return np.asarray(np.trace(x, axis1=axis1, axis2=axis2, offset=offset))\n+ return 
np.asarray(np.trace._implementation(x, axis1=axis1, axis2=axis2, offset=offset))\n \n def transpose(x: array, /, *, axes: Optional[Tuple[int, ...]] = None) -> array:\n \"\"\"\n@@ -184,4 +184,4 @@ def transpose(x: array, /, *, axes: Optional[Tuple[int, ...]] = None) -> array:\n \n See its docstring for more information.\n \"\"\"\n- return np.transpose(x, axes=axes)\n+ return np.transpose._implementation(x, axes=axes)\n", "new_path": "numpy/_array_api/_linear_algebra_functions.py", "old_path": "numpy/_array_api/_linear_algebra_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -19,7 +19,7 @@ def expand_dims(x: array, axis: int, /) -> array:\n \n See its docstring for more information.\n \"\"\"\n- return np.expand_dims(x, axis)\n+ return np.expand_dims._implementation(x, axis)\n \n def flip(x: array, /, *, axis: Optional[Union[int, Tuple[int, ...]]] = None) -> array:\n \"\"\"\n@@ -27,7 +27,7 @@ def flip(x: array, /, *, axis: Optional[Union[int, Tuple[int, ...]]] = None) ->\n \n See its docstring for more information.\n \"\"\"\n- return np.flip(x, axis=axis)\n+ return np.flip._implementation(x, axis=axis)\n \n def reshape(x: array, shape: Tuple[int, ...], /) -> array:\n \"\"\"\n@@ -35,7 +35,7 @@ def reshape(x: array, shape: Tuple[int, ...], /) -> array:\n \n See its docstring for more information.\n \"\"\"\n- return np.reshape(x, shape)\n+ return np.reshape._implementation(x, shape)\n \n def roll(x: array, shift: Union[int, Tuple[int, ...]], /, *, axis: Optional[Union[int, Tuple[int, ...]]] = None) -> array:\n \"\"\"\n@@ -43,7 +43,7 @@ def roll(x: array, shift: Union[int, Tuple[int, ...]], /, *, axis: Optional[Unio\n \n See its docstring for more information.\n \"\"\"\n- return np.roll(x, shift, axis=axis)\n+ return np.roll._implementation(x, shift, axis=axis)\n \n def squeeze(x: array, /, *, axis: Optional[Union[int, Tuple[int, ...]]] = None) -> array:\n \"\"\"\n@@ -51,7 +51,7 @@ def squeeze(x: array, /, *, axis: Optional[Union[int, Tuple[int, ...]]] = None)\n \n See its docstring for more information.\n \"\"\"\n- return np.squeeze(x, axis=axis)\n+ return np.squeeze._implementation(x, axis=axis)\n \n def stack(arrays: Tuple[array], /, *, axis: int = 0) -> array:\n \"\"\"\n@@ -59,4 +59,4 @@ def stack(arrays: Tuple[array], /, *, axis: int = 0) -> array:\n \n See its docstring for more information.\n \"\"\"\n- return np.stack(arrays, axis=axis)\n+ return np.stack._implementation(arrays, axis=axis)\n", "new_path": "numpy/_array_api/_manipulation_functions.py", "old_path": "numpy/_array_api/_manipulation_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -11,7 +11,7 @@ def argmax(x: array, /, *, axis: int = None, keepdims: bool = False) -> array:\n See its docstring for more information.\n \"\"\"\n # Note: this currently fails as np.argmax does not implement keepdims\n- return np.asarray(np.argmax(x, axis=axis, keepdims=keepdims))\n+ return np.asarray(np.argmax._implementation(x, axis=axis, keepdims=keepdims))\n \n def argmin(x: array, /, *, axis: int = None, keepdims: bool = False) -> array:\n \"\"\"\n@@ -20,7 +20,7 @@ def argmin(x: array, /, *, axis: int = None, keepdims: bool = False) -> array:\n See its docstring for more information.\n \"\"\"\n # Note: this currently fails as np.argmin does not implement keepdims\n- return np.asarray(np.argmin(x, axis=axis, keepdims=keepdims))\n+ return np.asarray(np.argmin._implementation(x, axis=axis, keepdims=keepdims))\n \n def nonzero(x: array, /) -> Tuple[array, ...]:\n \"\"\"\n@@ -28,7 +28,7 @@ def nonzero(x: array, /) -> Tuple[array, ...]:\n 
\n See its docstring for more information.\n \"\"\"\n- return np.nonzero(x)\n+ return np.nonzero._implementation(x)\n \n def where(condition: array, x1: array, x2: array, /) -> array:\n \"\"\"\n@@ -36,4 +36,4 @@ def where(condition: array, x1: array, x2: array, /) -> array:\n \n See its docstring for more information.\n \"\"\"\n- return np.where(condition, x1, x2)\n+ return np.where._implementation(condition, x1, x2)\n", "new_path": "numpy/_array_api/_searching_functions.py", "old_path": "numpy/_array_api/_searching_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -10,4 +10,4 @@ def unique(x: array, /, *, return_counts: bool = False, return_index: bool = Fal\n \n See its docstring for more information.\n \"\"\"\n- return np.unique(x, return_counts=return_counts, return_index=return_index, return_inverse=return_inverse, sorted=sorted)\n+ return np.unique._implementation(x, return_counts=return_counts, return_index=return_index, return_inverse=return_inverse, sorted=sorted)\n", "new_path": "numpy/_array_api/_set_functions.py", "old_path": "numpy/_array_api/_set_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -12,7 +12,7 @@ def argsort(x: array, /, *, axis: int = -1, descending: bool = False, stable: bo\n \"\"\"\n # Note: this keyword argument is different, and the default is different.\n kind = 'stable' if stable else 'quicksort'\n- res = np.argsort(x, axis=axis, kind=kind)\n+ res = np.argsort._implementation(x, axis=axis, kind=kind)\n if descending:\n res = np.flip(res, axis=axis)\n return res\n@@ -25,7 +25,7 @@ def sort(x: array, /, *, axis: int = -1, descending: bool = False, stable: bool\n \"\"\"\n # Note: this keyword argument is different, and the default is different.\n kind = 'stable' if stable else 'quicksort'\n- res = np.sort(x, axis=axis, kind=kind)\n+ res = np.sort._implementation(x, axis=axis, kind=kind)\n if descending:\n res = np.flip(res, axis=axis)\n return res\n", "new_path": "numpy/_array_api/_sorting_functions.py", "old_path": "numpy/_array_api/_sorting_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -5,24 +5,24 @@ from ._types import Optional, Tuple, Union, array\n import numpy as np\n \n def max(x: array, /, *, axis: Optional[Union[int, Tuple[int, ...]]] = None, keepdims: bool = False) -> array:\n- return np.max(x, axis=axis, keepdims=keepdims)\n+ return np.max._implementation(x, axis=axis, keepdims=keepdims)\n \n def mean(x: array, /, *, axis: Optional[Union[int, Tuple[int, ...]]] = None, keepdims: bool = False) -> array:\n- return np.asarray(np.mean(x, axis=axis, keepdims=keepdims))\n+ return np.asarray(np.mean._implementation(x, axis=axis, keepdims=keepdims))\n \n def min(x: array, /, *, axis: Optional[Union[int, Tuple[int, ...]]] = None, keepdims: bool = False) -> array:\n- return np.min(x, axis=axis, keepdims=keepdims)\n+ return np.min._implementation(x, axis=axis, keepdims=keepdims)\n \n def prod(x: array, /, *, axis: Optional[Union[int, Tuple[int, ...]]] = None, keepdims: bool = False) -> array:\n- return np.asarray(np.prod(x, axis=axis, keepdims=keepdims))\n+ return np.asarray(np.prod._implementation(x, axis=axis, keepdims=keepdims))\n \n def std(x: array, /, *, axis: Optional[Union[int, Tuple[int, ...]]] = None, correction: Union[int, float] = 0.0, keepdims: bool = False) -> array:\n # Note: the keyword argument correction is different here\n- return np.asarray(np.std(x, axis=axis, ddof=correction, keepdims=keepdims))\n+ return np.asarray(np.std._implementation(x, axis=axis, ddof=correction, keepdims=keepdims))\n \n def sum(x: array, /, 
*, axis: Optional[Union[int, Tuple[int, ...]]] = None, keepdims: bool = False) -> array:\n- return np.asarray(np.sum(x, axis=axis, keepdims=keepdims))\n+ return np.asarray(np.sum._implementation(x, axis=axis, keepdims=keepdims))\n \n def var(x: array, /, *, axis: Optional[Union[int, Tuple[int, ...]]] = None, correction: Union[int, float] = 0.0, keepdims: bool = False) -> array:\n # Note: the keyword argument correction is different here\n- return np.asarray(np.var(x, axis=axis, ddof=correction, keepdims=keepdims))\n+ return np.asarray(np.var._implementation(x, axis=axis, ddof=correction, keepdims=keepdims))\n", "new_path": "numpy/_array_api/_statistical_functions.py", "old_path": "numpy/_array_api/_statistical_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -10,7 +10,7 @@ def all(x: array, /, *, axis: Optional[Union[int, Tuple[int, ...]]] = None, keep\n \n See its docstring for more information.\n \"\"\"\n- return np.asarray(np.all(x, axis=axis, keepdims=keepdims))\n+ return np.asarray(np.all._implementation(x, axis=axis, keepdims=keepdims))\n \n def any(x: array, /, *, axis: Optional[Union[int, Tuple[int, ...]]] = None, keepdims: bool = False) -> array:\n \"\"\"\n@@ -18,4 +18,4 @@ def any(x: array, /, *, axis: Optional[Union[int, Tuple[int, ...]]] = None, keep\n \n See its docstring for more information.\n \"\"\"\n- return np.asarray(np.any(x, axis=axis, keepdims=keepdims))\n+ return np.asarray(np.any._implementation(x, axis=axis, keepdims=keepdims))\n", "new_path": "numpy/_array_api/_utility_functions.py", "old_path": "numpy/_array_api/_utility_functions.py" } ]
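A rough sketch of the pattern this commit applies, assuming NumPy's array_function_dispatch decorator, which attaches the undecorated callable to the public function as an _implementation attribute; the exact dispatch behaviour is a NumPy internal, so treat this as illustrative only:

    import numpy as np

    x = np.arange(3)
    # Public entry point: goes through the __array_function__ dispatch layer,
    # so array-like and overriding inputs are routed through the protocol first.
    np.expand_dims(x, 0)
    # Undecorated implementation attached by the dispatch decorator: skips that
    # layer, which is why the array API wrappers above call it directly.
    np.expand_dims._implementation(x, 0)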
994ce07595026d5de54f52ef5748b578f9fae1bc
cupy/cupy
09.07.2021 13:57:44
MIT License
Use better type signatures in the array API module. This includes returning custom dataclasses for finfo and iinfo that only contain the properties required by the array API specification. (A usage sketch follows this record.)
[ { "change_type": "MODIFY", "diff": "@@ -396,7 +396,8 @@ class Array:\n res = self._array.__le__(other._array)\n return self.__class__._new(res)\n \n- def __len__(self, /):\n+ # Note: __len__ may end up being removed from the array API spec.\n+ def __len__(self, /) -> int:\n \"\"\"\n Performs the operation __len__.\n \"\"\"\n@@ -843,7 +844,7 @@ class Array:\n return self.__class__._new(res)\n \n @property\n- def dtype(self):\n+ def dtype(self) -> Dtype:\n \"\"\"\n Array API compatible wrapper for :py:meth:`np.ndaray.dtype <numpy.ndarray.dtype>`.\n \n@@ -852,7 +853,7 @@ class Array:\n return self._array.dtype\n \n @property\n- def device(self):\n+ def device(self) -> Device:\n \"\"\"\n Array API compatible wrapper for :py:meth:`np.ndaray.device <numpy.ndarray.device>`.\n \n@@ -862,7 +863,7 @@ class Array:\n raise NotImplementedError(\"The device attribute is not yet implemented\")\n \n @property\n- def ndim(self):\n+ def ndim(self) -> int:\n \"\"\"\n Array API compatible wrapper for :py:meth:`np.ndaray.ndim <numpy.ndarray.ndim>`.\n \n@@ -871,7 +872,7 @@ class Array:\n return self._array.ndim\n \n @property\n- def shape(self):\n+ def shape(self) -> Tuple[int, ...]:\n \"\"\"\n Array API compatible wrapper for :py:meth:`np.ndaray.shape <numpy.ndarray.shape>`.\n \n@@ -880,7 +881,7 @@ class Array:\n return self._array.shape\n \n @property\n- def size(self):\n+ def size(self) -> int:\n \"\"\"\n Array API compatible wrapper for :py:meth:`np.ndaray.size <numpy.ndarray.size>`.\n \n@@ -889,7 +890,7 @@ class Array:\n return self._array.size\n \n @property\n- def T(self):\n+ def T(self) -> Array:\n \"\"\"\n Array API compatible wrapper for :py:meth:`np.ndaray.T <numpy.ndarray.T>`.\n \n", "new_path": "numpy/_array_api/_array_object.py", "old_path": "numpy/_array_api/_array_object.py" }, { "change_type": "MODIFY", "diff": "@@ -10,7 +10,7 @@ from ._dtypes import _all_dtypes\n \n import numpy as np\n \n-def asarray(obj: Union[float, NestedSequence[bool|int|float], SupportsDLPack, SupportsBufferProtocol], /, *, dtype: Optional[Dtype] = None, device: Optional[Device] = None, copy: Optional[bool] = None) -> Array:\n+def asarray(obj: Union[Array, float, NestedSequence[bool|int|float], SupportsDLPack, SupportsBufferProtocol], /, *, dtype: Optional[Dtype] = None, device: Optional[Device] = None, copy: Optional[bool] = None) -> Array:\n \"\"\"\n Array API compatible wrapper for :py:func:`np.asarray <numpy.asarray>`.\n \n", "new_path": "numpy/_array_api/_creation_functions.py", "old_path": "numpy/_array_api/_creation_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -2,6 +2,7 @@ from __future__ import annotations\n \n from ._array_object import Array\n \n+from dataclasses import dataclass\n from typing import TYPE_CHECKING\n if TYPE_CHECKING:\n from ._types import List, Tuple, Union, Dtype\n@@ -38,13 +39,44 @@ def can_cast(from_: Union[Dtype, Array], to: Dtype, /) -> bool:\n from_ = from_._array\n return np.can_cast(from_, to)\n \n+# These are internal objects for the return types of finfo and iinfo, since\n+# the NumPy versions contain extra data that isn't part of the spec.\n+@dataclass\n+class finfo_object:\n+ bits: int\n+ # Note: The types of the float data here are float, whereas in NumPy they\n+ # are scalars of the corresponding float dtype.\n+ eps: float\n+ max: float\n+ min: float\n+ # Note: smallest_normal is part of the array API spec, but cannot be used\n+ # until https://github.com/numpy/numpy/pull/18536 is merged.\n+\n+ # smallest_normal: float\n+\n+@dataclass\n+class iinfo_object:\n+ bits: 
int\n+ max: int\n+ min: int\n+\n def finfo(type: Union[Dtype, Array], /) -> finfo_object:\n \"\"\"\n Array API compatible wrapper for :py:func:`np.finfo <numpy.finfo>`.\n \n See its docstring for more information.\n \"\"\"\n- return np.finfo(type)\n+ fi = np.finfo(type)\n+ # Note: The types of the float data here are float, whereas in NumPy they\n+ # are scalars of the corresponding float dtype.\n+ return finfo_object(\n+ fi.bits,\n+ float(fi.eps),\n+ float(fi.max),\n+ float(fi.min),\n+ # TODO: Uncomment this when #18536 is merged.\n+ # float(fi.smallest_normal),\n+ )\n \n def iinfo(type: Union[Dtype, Array], /) -> iinfo_object:\n \"\"\"\n@@ -52,7 +84,8 @@ def iinfo(type: Union[Dtype, Array], /) -> iinfo_object:\n \n See its docstring for more information.\n \"\"\"\n- return np.iinfo(type)\n+ ii = np.iinfo(type)\n+ return iinfo_object(ii.bits, ii.max, ii.min)\n \n def result_type(*arrays_and_dtypes: Sequence[Union[Array, Dtype]]) -> Dtype:\n \"\"\"\n", "new_path": "numpy/_array_api/_data_type_functions.py", "old_path": "numpy/_array_api/_data_type_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -7,7 +7,7 @@ from typing import List, Optional, Tuple, Union\n import numpy as np\n \n # Note: the function name is different here\n-def concat(arrays: Tuple[Array, ...], /, *, axis: Optional[int] = 0) -> Array:\n+def concat(arrays: Union[Tuple[Array, ...], List[Array]], /, *, axis: Optional[int] = 0) -> Array:\n \"\"\"\n Array API compatible wrapper for :py:func:`np.concatenate <numpy.concatenate>`.\n \n@@ -56,7 +56,7 @@ def squeeze(x: Array, /, axis: Optional[Union[int, Tuple[int, ...]]] = None) ->\n \"\"\"\n return Array._new(np.squeeze(x._array, axis=axis))\n \n-def stack(arrays: Tuple[Array, ...], /, *, axis: int = 0) -> Array:\n+def stack(arrays: Union[Tuple[Array, ...], List[Array]], /, *, axis: int = 0) -> Array:\n \"\"\"\n Array API compatible wrapper for :py:func:`np.stack <numpy.stack>`.\n \n", "new_path": "numpy/_array_api/_manipulation_functions.py", "old_path": "numpy/_array_api/_manipulation_functions.py" } ]
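A small usage sketch of what the finfo/iinfo wrappers in this commit change: plain NumPy hands back dtype-scalar attributes, while the dataclasses above store only the spec-mandated fields as builtin Python numbers. Values here are illustrative:

    import numpy as np

    fi = np.finfo(np.float32)
    type(fi.eps)        # numpy.float32 scalar in plain NumPy
    # The finfo_object dataclass in the diff is constructed as
    # finfo_object(fi.bits, float(fi.eps), float(fi.max), float(fi.min)),
    # so the array API version exposes bits/eps/max/min as plain int and float only.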
783d157701ea6afa16a620669f89720864e62e9e
cupy/cupy
09.07.2021 18:08:22
MIT License
Make the array API left and right shift do type promotion. The spec previously said it should return the type of the left argument, but this was changed to do type promotion to be consistent with all the other elementwise functions/operators. (A behavioural sketch follows this record.)
[ { "change_type": "MODIFY", "diff": "@@ -410,11 +410,8 @@ class Array:\n \"\"\"\n if isinstance(other, (int, float, bool)):\n other = self._promote_scalar(other)\n- # Note: The spec requires the return dtype of bitwise_left_shift, and\n- # hence also __lshift__, to be the same as the first argument.\n- # np.ndarray.__lshift__ returns a type that is the type promotion of\n- # the two input types.\n- res = self._array.__lshift__(other._array).astype(self.dtype)\n+ self, other = self._normalize_two_args(self, other)\n+ res = self._array.__lshift__(other._array)\n return self.__class__._new(res)\n \n def __lt__(self: Array, other: Union[int, float, Array], /) -> Array:\n@@ -517,11 +514,8 @@ class Array:\n \"\"\"\n if isinstance(other, (int, float, bool)):\n other = self._promote_scalar(other)\n- # Note: The spec requires the return dtype of bitwise_right_shift, and\n- # hence also __rshift__, to be the same as the first argument.\n- # np.ndarray.__rshift__ returns a type that is the type promotion of\n- # the two input types.\n- res = self._array.__rshift__(other._array).astype(self.dtype)\n+ self, other = self._normalize_two_args(self, other)\n+ res = self._array.__rshift__(other._array)\n return self.__class__._new(res)\n \n def __setitem__(self, key, value, /):\n@@ -646,11 +640,8 @@ class Array:\n \"\"\"\n if isinstance(other, (int, float, bool)):\n other = self._promote_scalar(other)\n- # Note: The spec requires the return dtype of bitwise_left_shift, and\n- # hence also __lshift__, to be the same as the first argument.\n- # np.ndarray.__lshift__ returns a type that is the type promotion of\n- # the two input types.\n- res = self._array.__rlshift__(other._array).astype(other.dtype)\n+ self, other = self._normalize_two_args(self, other)\n+ res = self._array.__rlshift__(other._array)\n return self.__class__._new(res)\n \n def __imatmul__(self: Array, other: Array, /) -> Array:\n@@ -787,11 +778,8 @@ class Array:\n \"\"\"\n if isinstance(other, (int, float, bool)):\n other = self._promote_scalar(other)\n- # Note: The spec requires the return dtype of bitwise_right_shift, and\n- # hence also __rshift__, to be the same as the first argument.\n- # np.ndarray.__rshift__ returns a type that is the type promotion of\n- # the two input types.\n- res = self._array.__rrshift__(other._array).astype(other.dtype)\n+ self, other = self._normalize_two_args(self, other)\n+ res = self._array.__rrshift__(other._array)\n return self.__class__._new(res)\n \n @np.errstate(all='ignore')\n", "new_path": "numpy/_array_api/_array_object.py", "old_path": "numpy/_array_api/_array_object.py" }, { "change_type": "MODIFY", "diff": "@@ -136,10 +136,7 @@ def bitwise_left_shift(x1: Array, x2: Array, /) -> Array:\n # Note: bitwise_left_shift is only defined for x2 nonnegative.\n if np.any(x2._array < 0):\n raise ValueError('bitwise_left_shift(x1, x2) is only defined for x2 >= 0')\n- # Note: The spec requires the return dtype of bitwise_left_shift to be the\n- # same as the first argument. 
np.left_shift() returns a type that is the\n- # type promotion of the two input types.\n- return Array._new(np.left_shift(x1._array, x2._array).astype(x1.dtype))\n+ return Array._new(np.left_shift(x1._array, x2._array))\n \n # Note: the function name is different here\n def bitwise_invert(x: Array, /) -> Array:\n@@ -176,10 +173,7 @@ def bitwise_right_shift(x1: Array, x2: Array, /) -> Array:\n # Note: bitwise_right_shift is only defined for x2 nonnegative.\n if np.any(x2._array < 0):\n raise ValueError('bitwise_right_shift(x1, x2) is only defined for x2 >= 0')\n- # Note: The spec requires the return dtype of bitwise_left_shift to be the\n- # same as the first argument. np.left_shift() returns a type that is the\n- # type promotion of the two input types.\n- return Array._new(np.right_shift(x1._array, x2._array).astype(x1.dtype))\n+ return Array._new(np.right_shift(x1._array, x2._array))\n \n def bitwise_xor(x1: Array, x2: Array, /) -> Array:\n \"\"\"\n", "new_path": "numpy/_array_api/_elementwise_functions.py", "old_path": "numpy/_array_api/_elementwise_functions.py" } ]
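A behavioural sketch of the change above, using plain NumPy arrays to stand in for array API arrays; dtypes follow NumPy's usual promotion rules:

    import numpy as np

    a = np.ones(3, dtype=np.int16)
    b = np.ones(3, dtype=np.int32)
    res = np.left_shift(a, b)
    # np.left_shift promotes the operands, so res.dtype is int32.
    # The old wrapper forced res.astype(a.dtype) (int16); after this commit the
    # promoted dtype is returned, matching the other elementwise functions.
    print(res.dtype)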
29535ad693507084ff3691fefd637a6b7292674f
cupy/cupy
21.07.2021 15:45:36
MIT License
Implement the array API result_type() manually. np.result_type() has too many behaviors that we want to avoid in the array API namespace, like value-based casting and unwanted type promotions. Instead, we implement the exact type promotion table from the spec. (A comparison sketch follows this record.)
[ { "change_type": "MODIFY", "diff": "@@ -1,7 +1,7 @@\n from __future__ import annotations\n \n from ._array_object import Array\n-from ._dtypes import _all_dtypes\n+from ._dtypes import _all_dtypes, _result_type\n \n from dataclasses import dataclass\n from typing import TYPE_CHECKING, List, Tuple, Union\n@@ -94,12 +94,24 @@ def result_type(*arrays_and_dtypes: Sequence[Union[Array, Dtype]]) -> Dtype:\n \n See its docstring for more information.\n \"\"\"\n+ # Note: we use a custom implementation that gives only the type promotions\n+ # required by the spec rather than using np.result_type. NumPy implements\n+ # too many extra type promotions like int64 + uint64 -> float64, and does\n+ # value-based casting on scalar arrays.\n A = []\n for a in arrays_and_dtypes:\n if isinstance(a, Array):\n- a = a._array\n+ a = a.dtype\n elif isinstance(a, np.ndarray) or a not in _all_dtypes:\n raise TypeError(\"result_type() inputs must be array_api arrays or dtypes\")\n A.append(a)\n \n- return np.result_type(*A)\n+ if len(A) == 0:\n+ raise ValueError(\"at least one array or dtype is required\")\n+ elif len(A) == 1:\n+ return A[0]\n+ else:\n+ t = A[0]\n+ for t2 in A[1:]:\n+ t = _result_type(t, t2)\n+ return t\n", "new_path": "numpy/_array_api/_data_type_functions.py", "old_path": "numpy/_array_api/_data_type_functions.py" }, { "change_type": "MODIFY", "diff": "@@ -22,3 +22,72 @@ _floating_dtypes = (float32, float64)\n _integer_dtypes = (int8, int16, int32, int64, uint8, uint16, uint32, uint64)\n _integer_or_boolean_dtypes = (bool, int8, int16, int32, int64, uint8, uint16, uint32, uint64)\n _numeric_dtypes = (float32, float64, int8, int16, int32, int64, uint8, uint16, uint32, uint64)\n+\n+_promotion_table = {\n+ (int8, int8): int8,\n+ (int8, int16): int16,\n+ (int8, int32): int32,\n+ (int8, int64): int64,\n+ (int16, int8): int16,\n+ (int16, int16): int16,\n+ (int16, int32): int32,\n+ (int16, int64): int64,\n+ (int32, int8): int32,\n+ (int32, int16): int32,\n+ (int32, int32): int32,\n+ (int32, int64): int64,\n+ (int64, int8): int64,\n+ (int64, int16): int64,\n+ (int64, int32): int64,\n+ (int64, int64): int64,\n+ (uint8, uint8): uint8,\n+ (uint8, uint16): uint16,\n+ (uint8, uint32): uint32,\n+ (uint8, uint64): uint64,\n+ (uint16, uint8): uint16,\n+ (uint16, uint16): uint16,\n+ (uint16, uint32): uint32,\n+ (uint16, uint64): uint64,\n+ (uint32, uint8): uint32,\n+ (uint32, uint16): uint32,\n+ (uint32, uint32): uint32,\n+ (uint32, uint64): uint64,\n+ (uint64, uint8): uint64,\n+ (uint64, uint16): uint64,\n+ (uint64, uint32): uint64,\n+ (uint64, uint64): uint64,\n+ (int8, uint8): int16,\n+ (int8, uint16): int32,\n+ (int8, uint32): int64,\n+ (int16, uint8): int16,\n+ (int16, uint16): int32,\n+ (int16, uint32): int64,\n+ (int32, uint8): int32,\n+ (int32, uint16): int32,\n+ (int32, uint32): int64,\n+ (int64, uint8): int64,\n+ (int64, uint16): int64,\n+ (int64, uint32): int64,\n+ (uint8, int8): int16,\n+ (uint16, int8): int32,\n+ (uint32, int8): int64,\n+ (uint8, int16): int16,\n+ (uint16, int16): int32,\n+ (uint32, int16): int64,\n+ (uint8, int32): int32,\n+ (uint16, int32): int32,\n+ (uint32, int32): int64,\n+ (uint8, int64): int64,\n+ (uint16, int64): int64,\n+ (uint32, int64): int64,\n+ (float32, float32): float32,\n+ (float32, float64): float64,\n+ (float64, float32): float64,\n+ (float64, float64): float64,\n+ (bool, bool): bool,\n+}\n+\n+def _result_type(type1, type2):\n+ if (type1, type2) in _promotion_table:\n+ return _promotion_table[type1, type2]\n+ raise TypeError(f\"{type1} and {type2} cannot be type 
promoted together\")\n", "new_path": "numpy/_array_api/_dtypes.py", "old_path": "numpy/_array_api/_dtypes.py" } ]
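A comparison sketch, based on the comment in the patch, of how the table-driven promotion differs from np.result_type; the private helper's behaviour is paraphrased rather than executed here:

    import numpy as np

    # NumPy's own rule promotes mixed signed/unsigned 64-bit integers to float64:
    np.result_type(np.int64, np.uint64)    # dtype('float64')

    # The _promotion_table above has no (int64, uint64) entry, so the array API
    # result_type raises instead of silently promoting:
    #   _result_type(int64, uint64)
    #   TypeError: int64 and uint64 cannot be type promoted together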
4877478d275959f746dab4f7b91bfe68956f26f1
netflix/security_monkey
26.01.2018 18:59:26
Apache License 2.0
Fix for orphaned items that may develop from a failed watcher event. Also added optional (but on by default) silencing of verbose and useless botocore logs. (A sketch of the orphan fix follows this record.)
[ { "change_type": "MODIFY", "diff": "@@ -95,7 +95,6 @@ def create_item(item, technology, account):\n )\n \n \n-\n def detect_change(item, account, technology, complete_hash, durable_hash):\n \"\"\"\n Checks the database to see if the latest revision of the specified\n", "new_path": "security_monkey/datastore_utils.py", "old_path": "security_monkey/datastore_utils.py" }, { "change_type": "MODIFY", "diff": "@@ -12,7 +12,7 @@ import traceback\n \n from security_monkey import app, db, jirasync, sentry\n from security_monkey.alerter import Alerter\n-from security_monkey.datastore import store_exception, clear_old_exceptions\n+from security_monkey.datastore import store_exception, clear_old_exceptions, Technology, Account, Item, ItemRevision\n from security_monkey.monitors import get_monitors, get_monitors_and_dependencies\n from security_monkey.reporter import Reporter\n from security_monkey.task_scheduler.util import CELERY, setup\n@@ -70,9 +70,57 @@ def clear_expired_exceptions():\n app.logger.info(\"[-] Completed clearing out exceptions that have an expired TTL.\")\n \n \n+def fix_orphaned_deletions(account_name, technology_name):\n+ \"\"\"\n+ Possible issue with orphaned items. This will check if there are any, and will assume that the item\n+ was deleted. This will create a deletion change record to it.\n+\n+ :param account_name:\n+ :param technology_name:\n+ :return:\n+ \"\"\"\n+ # If technology doesn't exist, then create it:\n+ technology = Technology.query.filter(Technology.name == technology_name).first()\n+ if not technology:\n+ technology = Technology(name=technology_name)\n+ db.session.add(technology)\n+ db.session.commit()\n+ app.logger.info(\"Technology: {} did not exist... created it...\".format(technology_name))\n+\n+ account = Account.query.filter(Account.name == account_name).one()\n+\n+ # Query for orphaned items of the given technology/account pair:\n+ orphaned_items = Item.query.filter(Item.account_id == account.id, Item.tech_id == technology.id,\n+ Item.latest_revision_id == None).all() # noqa\n+\n+ if not orphaned_items:\n+ app.logger.info(\"[@] No orphaned items have been found. (This is good)\")\n+ return\n+\n+ # Fix the orphaned items:\n+ for oi in orphaned_items:\n+ app.logger.error(\"[?] Found an orphaned item: {}. Creating a deletion record for it\".format(oi.name))\n+ revision = ItemRevision(active=False, config={})\n+ oi.revisions.append(revision)\n+ db.session.add(revision)\n+ db.session.add(oi)\n+ db.session.commit()\n+\n+ # Update the latest revision id:\n+ db.session.refresh(revision)\n+ oi.latest_revision_id = revision.id\n+ db.session.add(oi)\n+\n+ db.session.commit()\n+ app.logger.info(\"[-] Created deletion record for item: {}.\".format(oi.name))\n+\n+\n def reporter_logic(account_name, technology_name):\n \"\"\"Logic for the run change reporter\"\"\"\n try:\n+ # Before doing anything... Look for orphaned items for this given technology. If they exist, then delete them:\n+ fix_orphaned_deletions(account_name, technology_name)\n+\n # Watch and Audit:\n monitors = find_changes(account_name, technology_name)\n \n@@ -140,6 +188,9 @@ def find_changes(account_name, monitor_name, debug=True):\n Runs the watcher and stores the result, re-audits all types to account\n for downstream dependencies.\n \"\"\"\n+ # Before doing anything... Look for orphaned items for this given technology. 
If they exist, then delete them:\n+ fix_orphaned_deletions(account_name, monitor_name)\n+\n monitors = get_monitors(account_name, [monitor_name], debug)\n for mon in monitors:\n cw = mon.watcher\n", "new_path": "security_monkey/task_scheduler/tasks.py", "old_path": "security_monkey/task_scheduler/tasks.py" }, { "change_type": "MODIFY", "diff": "@@ -84,7 +84,8 @@ class CelerySchedulerTestCase(SecurityMonkeyTestCase):\n \n db.session.commit()\n \n- def test_find_batch_changes(self):\n+ @patch(\"security_monkey.task_scheduler.tasks.fix_orphaned_deletions\")\n+ def test_find_batch_changes(self, mock_fix_orphaned):\n \"\"\"\n Runs through a full find job via the IAMRole watcher, as that supports batching.\n \n@@ -92,7 +93,7 @@ class CelerySchedulerTestCase(SecurityMonkeyTestCase):\n not going to do any boto work and that will instead be mocked out.\n :return:\n \"\"\"\n- from security_monkey.task_scheduler.tasks import manual_run_change_finder, setup\n+ from security_monkey.task_scheduler.tasks import manual_run_change_finder\n from security_monkey.monitors import Monitor\n from security_monkey.watchers.iam.iam_role import IAMRole\n from security_monkey.auditors.iam.iam_role import IAMRoleAuditor\n@@ -142,6 +143,7 @@ class CelerySchedulerTestCase(SecurityMonkeyTestCase):\n watcher.slurp = mock_slurp\n \n manual_run_change_finder([test_account.name], [watcher.index])\n+ assert mock_fix_orphaned.called\n \n # Check that all items were added to the DB:\n assert len(Item.query.all()) == 11\n@@ -271,8 +273,9 @@ class CelerySchedulerTestCase(SecurityMonkeyTestCase):\n client.put_role_policy(RoleName=\"roleNumber{}\".format(x), PolicyName=\"testpolicy\",\n PolicyDocument=json.dumps(OPEN_POLICY, indent=4))\n \n- def test_report_batch_changes(self):\n- from security_monkey.task_scheduler.tasks import manual_run_change_reporter, setup\n+ @patch(\"security_monkey.task_scheduler.tasks.fix_orphaned_deletions\")\n+ def test_report_batch_changes(self, mock_fix_orphaned):\n+ from security_monkey.task_scheduler.tasks import manual_run_change_reporter\n from security_monkey.datastore import Item, ItemRevision, ItemAudit\n from security_monkey.monitors import Monitor\n from security_monkey.watchers.iam.iam_role import IAMRole\n@@ -327,6 +330,8 @@ class CelerySchedulerTestCase(SecurityMonkeyTestCase):\n \n manual_run_change_reporter([test_account.name])\n \n+ assert mock_fix_orphaned.called\n+\n # Check that all items were added to the DB:\n assert len(Item.query.all()) == 11\n \n@@ -348,6 +353,32 @@ class CelerySchedulerTestCase(SecurityMonkeyTestCase):\n purge_it()\n assert mock.control.purge.called\n \n+ def test_fix_orphaned_deletions(self):\n+ test_account = Account.query.filter(Account.name == \"TEST_ACCOUNT1\").one()\n+ technology = Technology(name=\"orphaned\")\n+\n+ db.session.add(technology)\n+ db.session.commit()\n+\n+ orphaned_item = Item(name=\"orphaned\", region=\"us-east-1\", tech_id=technology.id, account_id=test_account.id)\n+ db.session.add(orphaned_item)\n+ db.session.commit()\n+\n+ assert not orphaned_item.latest_revision_id\n+ assert not orphaned_item.revisions.count()\n+ assert len(Item.query.filter(Item.account_id == test_account.id, Item.tech_id == technology.id,\n+ Item.latest_revision_id == None).all()) == 1 # noqa\n+\n+ from security_monkey.task_scheduler.tasks import fix_orphaned_deletions\n+ fix_orphaned_deletions(test_account.name, technology.name)\n+\n+ assert not Item.query.filter(Item.account_id == test_account.id, Item.tech_id == technology.id,\n+ Item.latest_revision_id == 
None).all() # noqa\n+\n+ assert orphaned_item.latest_revision_id\n+ assert orphaned_item.revisions.count() == 1\n+ assert orphaned_item.latest_config == {}\n+\n @patch(\"security_monkey.task_scheduler.beat.setup\")\n @patch(\"security_monkey.task_scheduler.beat.purge_it\")\n @patch(\"security_monkey.task_scheduler.tasks.task_account_tech\")\n", "new_path": "security_monkey/tests/scheduling/test_celery_scheduler.py", "old_path": "security_monkey/tests/scheduling/test_celery_scheduler.py" }, { "change_type": "MODIFY", "diff": "@@ -26,10 +26,17 @@ from copy import deepcopy\n import dpath.util\n from dpath.exceptions import PathNotFound\n \n+import logging\n+\n watcher_registry = {}\n abstract_classes = set(['Watcher', 'CloudAuxWatcher', 'CloudAuxBatchedWatcher'])\n \n \n+if not app.config.get(\"DONT_IGNORE_BOTO_VERBOSE_LOGGERS\"):\n+ logging.getLogger('botocore.vendored.requests.packages.urllib3').setLevel(logging.WARNING)\n+ logging.getLogger('botocore.credentials').setLevel(logging.WARNING)\n+\n+\n class WatcherType(type):\n def __init__(cls, name, bases, attrs):\n super(WatcherType, cls).__init__(name, bases, attrs)\n", "new_path": "security_monkey/watcher.py", "old_path": "security_monkey/watcher.py" }, { "change_type": "MODIFY", "diff": "@@ -67,10 +67,15 @@ class SQS(CloudAuxBatchedWatcher):\n \n # Offset by the existing items in the list (from other regions)\n offset = len(self.corresponding_items)\n+ queue_count = -1\n \n- for i in range(0, len(queues)):\n- items.append({\"Url\": queues[i], \"Region\": kwargs[\"region\"]})\n- self.corresponding_items[queues[i]] = i + offset\n+ for item_count in range(0, len(queues)):\n+ if self.corresponding_items.get(queues[item_count]):\n+ app.logger.error(\"[?] Received a duplicate item in the SQS list: {}. Skipping it.\".format(queues[item_count]))\n+ continue\n+ queue_count += 1\n+ items.append({\"Url\": queues[item_count], \"Region\": kwargs[\"region\"]})\n+ self.corresponding_items[queues[item_count]] = queue_count + offset\n \n return items\n \n", "new_path": "security_monkey/watchers/sqs.py", "old_path": "security_monkey/watchers/sqs.py" } ]
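A condensed sketch of what 'orphaned' means in this fix: an Item row whose latest_revision_id is NULL because a watcher failed before attaching any revision. The remedy, paraphrased from the diff above (SQLAlchemy session handling omitted):

    # Orphans are items of this account/technology pair with no revision at all.
    orphaned_items = Item.query.filter(Item.account_id == account.id,
                                       Item.tech_id == technology.id,
                                       Item.latest_revision_id == None).all()  # noqa
    for oi in orphaned_items:
        # Treat the orphan as deleted by appending an inactive, empty revision,
        # then point latest_revision_id at it.
        revision = ItemRevision(active=False, config={})
        oi.revisions.append(revision)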
84fd14194ddaa5b890e4479def071ce53a93b9d4
netflix/security_monkey
07.05.2018 10:58:36
Apache License 2.0
Add option to post scan queue metrics. This commit adds an option to SM to post metrics to CloudWatch. Metric data will be posted whenever scan queue items are added or removed. (A configuration sketch follows this record.)
[ { "change_type": "MODIFY", "diff": "@@ -5,6 +5,7 @@ This document outlines how to configure Security Monkey to:\n \n 1. Automatically run the API\n 1. Automatically scan for changes in your environment.\n+1. Configure Security Monkey to send scanning performance metrics\n \n Each section is important, please read them thoroughly.\n \n@@ -180,6 +181,11 @@ Supervisor will run the Celery `worker` command, which is:\n so keep the supervisor configurations on these instances separate.\n \n \n+Configure Security Monkey to send scanning performance metrics\n+--------------------------------------------------------------\n+Security Monkey can be configured to send metrics when objects are added or removed from the scanning queue. This allows operators to check Security Monkey performance and ensure that items are being processed from the queue in a timely manner. To do so set `METRICS_ENABLED` to `True`. You will need `cloudwatch:PutMetricData` permission. Metrics will be posted with the namespace `securitymonkey` unless configured using the variable `METRICS_NAMESPACE`. You will also want to set `METRICS_POST_REGION` with the region you want to post CloudWatch Metrics to (default: `us-east-1`).\n+\n+\n Deployment Strategies\n --------------------\n A typical deployment strategy is:\n", "new_path": "docs/autostarting.md", "old_path": "docs/autostarting.md" }, { "change_type": "MODIFY", "diff": "@@ -26,6 +26,7 @@ from security_monkey.datastore import store_exception, clear_old_exceptions, Tec\n from security_monkey.monitors import get_monitors, get_monitors_and_dependencies\n from security_monkey.reporter import Reporter\n from security_monkey.task_scheduler.util import CELERY, setup\n+import boto3\n from sqlalchemy.exc import OperationalError, InvalidRequestError, StatementError\n \n \n@@ -216,6 +217,8 @@ def find_changes(account_name, monitor_name, debug=True):\n fix_orphaned_deletions(account_name, monitor_name)\n \n monitors = get_monitors(account_name, [monitor_name], debug)\n+\n+ items = []\n for mon in monitors:\n cw = mon.watcher\n app.logger.info(\"[-->] Looking for changes in account: {}, technology: {}\".format(account_name, cw.index))\n@@ -224,17 +227,26 @@ def find_changes(account_name, monitor_name, debug=True):\n else:\n # Just fetch normally...\n (items, exception_map) = cw.slurp()\n+\n+ _post_metric(\n+ 'queue_items_added',\n+ len(items),\n+ account_name=account_name,\n+ tech=cw.i_am_singular\n+ )\n+\n cw.find_changes(current=items, exception_map=exception_map)\n+\n cw.save()\n \n # Batched monitors have already been monitored, and they will be skipped over.\n- audit_changes([account_name], [monitor_name], False, debug)\n+ audit_changes([account_name], [monitor_name], False, debug, items_count=len(items))\n db.session.close()\n \n return monitors\n \n \n-def audit_changes(accounts, monitor_names, send_report, debug=True, skip_batch=True):\n+def audit_changes(accounts, monitor_names, send_report, debug=True, skip_batch=True, items_count=None):\n \"\"\"\n Audits changes in the accounts\n :param accounts:\n@@ -254,6 +266,13 @@ def audit_changes(accounts, monitor_names, send_report, debug=True, skip_batch=T\n app.logger.debug(\"[-->] Auditing account: {}, technology: {}\".format(account, monitor.watcher.index))\n _audit_changes(account, monitor.auditors, send_report, debug)\n \n+ _post_metric(\n+ 'queue_items_completed',\n+ items_count,\n+ account_name=account,\n+ tech=monitor.watcher.i_am_singular\n+ )\n+\n \n def batch_logic(monitor, current_watcher, account_name, debug):\n 
\"\"\"\n@@ -293,9 +312,23 @@ def batch_logic(monitor, current_watcher, account_name, debug):\n ))\n (items, exception_map) = current_watcher.slurp()\n \n+ _post_metric(\n+ 'queue_items_added',\n+ len(items),\n+ account_name=account_name,\n+ tech=current_watcher.i_am_singular\n+ )\n+\n audit_items = current_watcher.find_changes(current=items, exception_map=exception_map)\n _audit_specific_changes(monitor, audit_items, False, debug)\n \n+ _post_metric(\n+ 'queue_items_completed',\n+ len(items),\n+ account_name=account_name,\n+ tech=current_watcher.i_am_singular\n+ )\n+\n # Delete the items that no longer exist:\n app.logger.debug(\"[-->] Deleting all items for {technology}/{account} that no longer exist.\".format(\n technology=current_watcher.i_am_plural, account=account_name\n@@ -349,3 +382,31 @@ def _audit_specific_changes(monitor, audit_items, send_report, debug=True):\n monitor.watcher.accounts[0])\n db.session.remove()\n store_exception(\"scheduler-audit-changes\", None, e)\n+\n+\n+def _post_metric(event_type, amount, account_name=None, tech=None):\n+ if not app.config.get('METRICS_ENABLED', False):\n+ return\n+\n+ cw_client = boto3.client('cloudwatch', region_name=app.config.get('METRICS_POST_REGION', 'us-east-1'))\n+ cw_client.put_metric_data(\n+ Namespace=app.config.get('METRICS_NAMESPACE', 'securitymonkey'),\n+ MetricData=[\n+ {\n+ 'MetricName': event_type,\n+ 'Timestamp': int(time.time()),\n+ 'Value': amount,\n+ 'Unit': 'Count',\n+ 'Dimensions': [\n+ {\n+ 'Name': 'tech',\n+ 'Value': tech\n+ },\n+ {\n+ 'Name': 'account_number',\n+ 'Value': Account.query.filter(Account.name == account_name).first().identifier\n+ }\n+ ]\n+ }\n+ ]\n+ )\n", "new_path": "security_monkey/task_scheduler/tasks.py", "old_path": "security_monkey/task_scheduler/tasks.py" } ]
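A configuration sketch for enabling the metrics described above; the setting names and defaults come from the docs/autostarting.md change in this record, while where they live in a given deployment's config file is an assumption:

    # Security Monkey config (location depends on the deployment)
    METRICS_ENABLED = True                 # off by default
    METRICS_NAMESPACE = 'securitymonkey'   # CloudWatch namespace (default)
    METRICS_POST_REGION = 'us-east-1'      # region metrics are posted to (default)
    # The worker's IAM role also needs the cloudwatch:PutMetricData permission.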
0b2146c8f794d5642a0a4feb9152916b49fd4be8
mesonbuild/meson
06.02.2017 11:51:46
Apache License 2.0
Use named fields for command_template when generating the ninja command. The command template becomes easier to read with named fields. (A small before/after sketch follows this record.)
[ { "change_type": "MODIFY", "diff": "@@ -1232,15 +1232,16 @@ int dummy;\n return\n rule = 'rule STATIC%s_LINKER\\n' % crstr\n if mesonlib.is_windows():\n- command_templ = ''' command = %s @$out.rsp\n+ command_template = ''' command = {executable} @$out.rsp\n rspfile = $out.rsp\n- rspfile_content = $LINK_ARGS %s $in\n+ rspfile_content = $LINK_ARGS {output_args} $in\n '''\n else:\n- command_templ = ' command = %s $LINK_ARGS %s $in\\n'\n- command = command_templ % (\n- ' '.join(static_linker.get_exelist()),\n- ' '.join(static_linker.get_output_args('$out')))\n+ command_template = ' command = {executable} $LINK_ARGS {output_args} $in\\n'\n+ command = command_template.format(\n+ executable=' '.join(static_linker.get_exelist()),\n+ output_args=' '.join(static_linker.get_output_args('$out'))\n+ )\n description = ' description = Static linking library $out\\n\\n'\n outfile.write(rule)\n outfile.write(command)\n@@ -1273,16 +1274,17 @@ int dummy;\n pass\n rule = 'rule %s%s_LINKER\\n' % (langname, crstr)\n if mesonlib.is_windows():\n- command_template = ''' command = %s @$out.rsp\n+ command_template = ''' command = {executable} @$out.rsp\n rspfile = $out.rsp\n- rspfile_content = %s $ARGS %s $in $LINK_ARGS $aliasing\n+ rspfile_content = {cross_args} $ARGS {output_args} $in $LINK_ARGS $aliasing\n '''\n else:\n- command_template = ' command = %s %s $ARGS %s $in $LINK_ARGS $aliasing\\n'\n- command = command_template % (\n- ' '.join(compiler.get_linker_exelist()),\n- ' '.join(cross_args),\n- ' '.join(compiler.get_linker_output_args('$out')))\n+ command_template = ' command = {executable} {cross_args} $ARGS {output_args} $in $LINK_ARGS $aliasing\\n'\n+ command = command_template.format(\n+ executable=' '.join(compiler.get_linker_exelist()),\n+ cross_args=' '.join(cross_args),\n+ output_args=' '.join(compiler.get_linker_output_args('$out'))\n+ )\n description = ' description = Linking target $out'\n outfile.write(rule)\n outfile.write(command)\n@@ -1386,17 +1388,18 @@ rule FORTRAN_DEP_HACK\n if getattr(self, 'created_llvm_ir_rule', False):\n return\n rule = 'rule llvm_ir{}_COMPILER\\n'.format('_CROSS' if is_cross else '')\n- args = [' '.join([ninja_quote(i) for i in compiler.get_exelist()]),\n- ' '.join(self.get_cross_info_lang_args(compiler.language, is_cross)),\n- ' '.join(compiler.get_output_args('$out')),\n- ' '.join(compiler.get_compile_only_args())]\n if mesonlib.is_windows():\n- command_template = ' command = {} @$out.rsp\\n' \\\n+ command_template = ' command = {executable} @$out.rsp\\n' \\\n ' rspfile = $out.rsp\\n' \\\n- ' rspfile_content = {} $ARGS {} {} $in\\n'\n+ ' rspfile_content = {cross_args} $ARGS {output_args} {compile_only_args} $in\\n'\n else:\n- command_template = ' command = {} {} $ARGS {} {} $in\\n'\n- command = command_template.format(*args)\n+ command_template = ' command = {executable} {cross_args} $ARGS {output_args} {compile_only_args} $in\\n'\n+ command = command_template.format(\n+ executable=' '.join([ninja_quote(i) for i in compiler.get_exelist()]),\n+ cross_args=' '.join(self.get_cross_info_lang_args(compiler.language, is_cross)),\n+ output_args=' '.join(compiler.get_output_args('$out')),\n+ compile_only_args=' '.join(compiler.get_compile_only_args())\n+ )\n description = ' description = Compiling LLVM IR object $in.\\n'\n outfile.write(rule)\n outfile.write(command)\n@@ -1448,18 +1451,19 @@ rule FORTRAN_DEP_HACK\n quoted_depargs.append(d)\n cross_args = self.get_cross_info_lang_args(langname, is_cross)\n if mesonlib.is_windows():\n- command_template = ''' command = %s 
@$out.rsp\n+ command_template = ''' command = {executable} @$out.rsp\n rspfile = $out.rsp\n- rspfile_content = %s $ARGS %s %s %s $in\n+ rspfile_content = {cross_args} $ARGS {dep_args} {output_args} {compile_only_args} $in\n '''\n else:\n- command_template = ' command = %s %s $ARGS %s %s %s $in\\n'\n- command = command_template % (\n- ' '.join([ninja_quote(i) for i in compiler.get_exelist()]),\n- ' '.join(cross_args),\n- ' '.join(quoted_depargs),\n- ' '.join(compiler.get_output_args('$out')),\n- ' '.join(compiler.get_compile_only_args()))\n+ command_template = ' command = {executable} {cross_args} $ARGS {dep_args} {output_args} {compile_only_args} $in\\n'\n+ command = command_template.format(\n+ executable=' '.join([ninja_quote(i) for i in compiler.get_exelist()]),\n+ cross_args=' '.join(cross_args),\n+ dep_args=' '.join(quoted_depargs),\n+ output_args=' '.join(compiler.get_output_args('$out')),\n+ compile_only_args=' '.join(compiler.get_compile_only_args())\n+ )\n description = ' description = Compiling %s object $out\\n' % langname\n if compiler.get_id() == 'msvc':\n deps = ' deps = msvc\\n'\n@@ -1497,12 +1501,13 @@ rule FORTRAN_DEP_HACK\n output = ''\n else:\n output = ' '.join(compiler.get_output_args('$out'))\n- command = \" command = %s %s $ARGS %s %s %s $in\\n\" % (\n- ' '.join(compiler.get_exelist()),\n- ' '.join(cross_args),\n- ' '.join(quoted_depargs),\n- output,\n- ' '.join(compiler.get_compile_only_args()))\n+ command = \" command = {executable} {cross_args} $ARGS {dep_args} {output_args} {compile_only_args} $in\\n\".format(\n+ executable=' '.join(compiler.get_exelist()),\n+ cross_args=' '.join(cross_args),\n+ dep_args=' '.join(quoted_depargs),\n+ output_args=output,\n+ compile_only_args=' '.join(compiler.get_compile_only_args())\n+ )\n description = ' description = Precompiling header %s\\n' % '$in'\n if compiler.get_id() == 'msvc':\n deps = ' deps = msvc\\n'\n", "new_path": "mesonbuild/backend/ninjabackend.py", "old_path": "mesonbuild/backend/ninjabackend.py" } ]
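A tiny before/after illustration of the readability point in this commit; the values are placeholders, only the substitution style matters:

    executable = 'ar'            # placeholder linker executable
    output_args = 'csrD $out'    # placeholder output arguments

    # before: positional %-substitution
    before = ' command = %s $LINK_ARGS %s $in\n' % (executable, output_args)
    # after: named str.format() fields, as in the diff
    after = ' command = {executable} $LINK_ARGS {output_args} $in\n'.format(
        executable=executable, output_args=output_args)
    assert before == after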
73b2ee08a884d6baa7b8e3c35c6da8f17aa9a875
mesonbuild/meson
13.02.2017 20:59:03
Apache License 2.0
Rewrite custom_target template string substitution. Factor it out into a function in mesonlib.py. This will allow us to reuse it for generators and for configure_file(). The latter doesn't implement this at all right now. Also includes unit tests.
[ { "change_type": "MODIFY", "diff": "@@ -603,19 +603,15 @@ class Backend:\n return srcs\n \n def eval_custom_target_command(self, target, absolute_outputs=False):\n- # We only want the outputs to be absolute when using the VS backend\n- if not absolute_outputs:\n- ofilenames = [os.path.join(self.get_target_dir(target), i) for i in target.output]\n- else:\n- ofilenames = [os.path.join(self.environment.get_build_dir(), self.get_target_dir(target), i)\n- for i in target.output]\n- srcs = self.get_custom_target_sources(target)\n+ # We want the outputs to be absolute only when using the VS backend\n outdir = self.get_target_dir(target)\n- # Many external programs fail on empty arguments.\n- if outdir == '':\n- outdir = '.'\n- if target.absolute_paths:\n+ if absolute_outputs:\n outdir = os.path.join(self.environment.get_build_dir(), outdir)\n+ outputs = []\n+ for i in target.output:\n+ outputs.append(os.path.join(outdir, i))\n+ inputs = self.get_custom_target_sources(target)\n+ # Evaluate the command list\n cmd = []\n for i in target.command:\n if isinstance(i, build.Executable):\n@@ -631,37 +627,10 @@ class Backend:\n if target.absolute_paths:\n i = os.path.join(self.environment.get_build_dir(), i)\n # FIXME: str types are blindly added ignoring 'target.absolute_paths'\n+ # because we can't know if they refer to a file or just a string\n elif not isinstance(i, str):\n err_msg = 'Argument {0} is of unknown type {1}'\n raise RuntimeError(err_msg.format(str(i), str(type(i))))\n- for (j, src) in enumerate(srcs):\n- i = i.replace('@INPUT%d@' % j, src)\n- for (j, res) in enumerate(ofilenames):\n- i = i.replace('@OUTPUT%d@' % j, res)\n- if '@INPUT@' in i:\n- msg = 'Custom target {} has @INPUT@ in the command, but'.format(target.name)\n- if len(srcs) == 0:\n- raise MesonException(msg + ' no input files')\n- if i == '@INPUT@':\n- cmd += srcs\n- continue\n- else:\n- if len(srcs) > 1:\n- raise MesonException(msg + ' more than one input file')\n- i = i.replace('@INPUT@', srcs[0])\n- elif '@OUTPUT@' in i:\n- msg = 'Custom target {} has @OUTPUT@ in the command, but'.format(target.name)\n- if len(ofilenames) == 0:\n- raise MesonException(msg + ' no output files')\n- if i == '@OUTPUT@':\n- cmd += ofilenames\n- continue\n- else:\n- if len(ofilenames) > 1:\n- raise MesonException(msg + ' more than one output file')\n- i = i.replace('@OUTPUT@', ofilenames[0])\n- elif '@OUTDIR@' in i:\n- i = i.replace('@OUTDIR@', outdir)\n elif '@DEPFILE@' in i:\n if target.depfile is None:\n msg = 'Custom target {!r} has @DEPFILE@ but no depfile ' \\\n@@ -680,10 +649,11 @@ class Backend:\n lead_dir = ''\n else:\n lead_dir = self.environment.get_build_dir()\n- i = i.replace(source,\n- os.path.join(lead_dir,\n- outdir))\n+ i = i.replace(source, os.path.join(lead_dir, outdir))\n cmd.append(i)\n+ # Substitute the rest of the template strings\n+ values = mesonlib.get_filenames_templates_dict(inputs, outputs)\n+ cmd = mesonlib.substitute_values(cmd, values)\n # This should not be necessary but removing it breaks\n # building GStreamer on Windows. 
The underlying issue\n # is problems with quoting backslashes on Windows\n@@ -703,7 +673,7 @@ class Backend:\n #\n # https://github.com/mesonbuild/meson/pull/737\n cmd = [i.replace('\\\\', '/') for i in cmd]\n- return srcs, ofilenames, cmd\n+ return inputs, outputs, cmd\n \n def run_postconf_scripts(self):\n env = {'MESON_SOURCE_ROOT': self.environment.get_source_dir(),\n", "new_path": "mesonbuild/backend/backends.py", "old_path": "mesonbuild/backend/backends.py" }, { "change_type": "MODIFY", "diff": "@@ -1530,3 +1530,22 @@ class TestSetup:\n self.gdb = gdb\n self.timeout_multiplier = timeout_multiplier\n self.env = env\n+\n+def get_sources_output_names(sources):\n+ '''\n+ For the specified list of @sources which can be strings, Files, or targets,\n+ get all the output basenames.\n+ '''\n+ names = []\n+ for s in sources:\n+ if hasattr(s, 'held_object'):\n+ s = s.held_object\n+ if isinstance(s, str):\n+ names.append(s)\n+ elif isinstance(s, (BuildTarget, CustomTarget, GeneratedList)):\n+ names += s.get_outputs()\n+ elif isinstance(s, File):\n+ names.append(s.fname)\n+ else:\n+ raise AssertionError('Unknown source type: {!r}'.format(s))\n+ return names\n", "new_path": "mesonbuild/build.py", "old_path": "mesonbuild/build.py" }, { "change_type": "MODIFY", "diff": "@@ -521,3 +521,154 @@ def commonpath(paths):\n new = os.path.join(*new)\n common = pathlib.PurePath(new)\n return str(common)\n+\n+def iter_regexin_iter(regexiter, initer):\n+ '''\n+ Takes each regular expression in @regexiter and tries to search for it in\n+ every item in @initer. If there is a match, returns that match.\n+ Else returns False.\n+ '''\n+ for regex in regexiter:\n+ for ii in initer:\n+ if not isinstance(ii, str):\n+ continue\n+ match = re.search(regex, ii)\n+ if match:\n+ return match.group()\n+ return False\n+\n+def _substitute_values_check_errors(command, values):\n+ # Error checking\n+ inregex = ('@INPUT([0-9]+)?@', '@PLAINNAME@', '@BASENAME@')\n+ outregex = ('@OUTPUT([0-9]+)?@', '@OUTDIR@')\n+ if '@INPUT@' not in values:\n+ # Error out if any input-derived templates are present in the command\n+ match = iter_regexin_iter(inregex, command)\n+ if match:\n+ m = 'Command cannot have {!r}, since no input files were specified'\n+ raise MesonException(m.format(match))\n+ else:\n+ if len(values['@INPUT@']) > 1:\n+ # Error out if @PLAINNAME@ or @BASENAME@ is present in the command\n+ match = iter_regexin_iter(inregex[1:], command)\n+ if match:\n+ raise MesonException('Command cannot have {!r} when there is '\n+ 'more than one input file'.format(match))\n+ # Error out if an invalid @INPUTnn@ template was specified\n+ for each in command:\n+ if not isinstance(each, str):\n+ continue\n+ match = re.search(inregex[0], each)\n+ if match and match.group() not in values:\n+ m = 'Command cannot have {!r} since there are only {!r} inputs'\n+ raise MesonException(m.format(match.group(), len(values['@INPUT@'])))\n+ if '@OUTPUT@' not in values:\n+ # Error out if any output-derived templates are present in the command\n+ match = iter_regexin_iter(outregex, command)\n+ if match:\n+ m = 'Command cannot have {!r} since there are no outputs'\n+ raise MesonException(m.format(match))\n+ else:\n+ # Error out if an invalid @OUTPUTnn@ template was specified\n+ for each in command:\n+ if not isinstance(each, str):\n+ continue\n+ match = re.search(outregex[0], each)\n+ if match and match.group() not in values:\n+ m = 'Command cannot have {!r} since there are only {!r} outputs'\n+ raise MesonException(m.format(match.group(), 
len(values['@OUTPUT@'])))\n+\n+def substitute_values(command, values):\n+ '''\n+ Substitute the template strings in the @values dict into the list of\n+ strings @command and return a new list. For a full list of the templates,\n+ see get_filenames_templates_dict()\n+\n+ If multiple inputs/outputs are given in the @values dictionary, we\n+ substitute @INPUT@ and @OUTPUT@ only if they are the entire string, not\n+ just a part of it, and in that case we substitute *all* of them.\n+ '''\n+ # Error checking\n+ _substitute_values_check_errors(command, values)\n+ # Substitution\n+ outcmd = []\n+ for vv in command:\n+ if not isinstance(vv, str):\n+ outcmd.append(vv)\n+ elif '@INPUT@' in vv:\n+ inputs = values['@INPUT@']\n+ if vv == '@INPUT@':\n+ outcmd += inputs\n+ elif len(inputs) == 1:\n+ outcmd.append(vv.replace('@INPUT@', inputs[0]))\n+ else:\n+ raise MesonException(\"Command has '@INPUT@' as part of a \"\n+ \"string and more than one input file\")\n+ elif '@OUTPUT@' in vv:\n+ outputs = values['@OUTPUT@']\n+ if vv == '@OUTPUT@':\n+ outcmd += outputs\n+ elif len(outputs) == 1:\n+ outcmd.append(vv.replace('@OUTPUT@', outputs[0]))\n+ else:\n+ raise MesonException(\"Command has '@OUTPUT@' as part of a \"\n+ \"string and more than one output file\")\n+ # Append values that are exactly a template string.\n+ # This is faster than a string replace.\n+ elif vv in values:\n+ outcmd.append(values[vv])\n+ # Substitute everything else with replacement\n+ else:\n+ for key, value in values.items():\n+ if key in ('@INPUT@', '@OUTPUT@'):\n+ # Already done above\n+ continue\n+ vv = vv.replace(key, value)\n+ outcmd.append(vv)\n+ return outcmd\n+\n+def get_filenames_templates_dict(inputs, outputs):\n+ '''\n+ Create a dictionary with template strings as keys and values as values for\n+ the following templates:\n+\n+ @INPUT@ - the full path to one or more input files, from @inputs\n+ @OUTPUT@ - the full path to one or more output files, from @outputs\n+ @OUTDIR@ - the full path to the directory containing the output files\n+\n+ If there is only one input file, the following keys are also created:\n+\n+ @PLAINNAME@ - the filename of the input file\n+ @BASENAME@ - the filename of the input file with the extension removed\n+\n+ If there is more than one input file, the following keys are also created:\n+\n+ @INPUT0@, @INPUT1@, ... one for each input file\n+\n+ If there is more than one output file, the following keys are also created:\n+\n+ @OUTPUT0@, @OUTPUT1@, ... 
one for each output file\n+ '''\n+ values = {}\n+ # Gather values derived from the input\n+ if inputs:\n+ # We want to substitute all the inputs.\n+ values['@INPUT@'] = inputs\n+ for (ii, vv) in enumerate(inputs):\n+ # Write out @INPUT0@, @INPUT1@, ...\n+ values['@INPUT{}@'.format(ii)] = vv\n+ if len(inputs) == 1:\n+ # Just one value, substitute @PLAINNAME@ and @BASENAME@\n+ values['@PLAINNAME@'] = plain = os.path.split(inputs[0])[1]\n+ values['@BASENAME@'] = os.path.splitext(plain)[0]\n+ if outputs:\n+ # Gather values derived from the outputs, similar to above.\n+ values['@OUTPUT@'] = outputs\n+ for (ii, vv) in enumerate(outputs):\n+ values['@OUTPUT{}@'.format(ii)] = vv\n+ # Outdir should be the same for all outputs\n+ values['@OUTDIR@'] = os.path.split(outputs[0])[0]\n+ # Many external programs fail on empty arguments.\n+ if values['@OUTDIR@'] == '':\n+ values['@OUTDIR@'] = '.'\n+ return values\n", "new_path": "mesonbuild/mesonlib.py", "old_path": "mesonbuild/mesonlib.py" }, { "change_type": "MODIFY", "diff": "@@ -174,6 +174,157 @@ class InternalTests(unittest.TestCase):\n libdir = '/some/path/to/prefix/libdir'\n self.assertEqual(commonpath([prefix, libdir]), str(pathlib.PurePath(prefix)))\n \n+ def test_string_templates_substitution(self):\n+ dictfunc = mesonbuild.mesonlib.get_filenames_templates_dict\n+ substfunc = mesonbuild.mesonlib.substitute_values\n+ ME = mesonbuild.mesonlib.MesonException\n+\n+ # Identity\n+ self.assertEqual(dictfunc([], []), {})\n+\n+ # One input, no outputs\n+ inputs = ['bar/foo.c.in']\n+ outputs = []\n+ ret = dictfunc(inputs, outputs)\n+ d = {'@INPUT@': inputs, '@INPUT0@': inputs[0],\n+ '@PLAINNAME@': 'foo.c.in', '@BASENAME@': 'foo.c'}\n+ # Check dictionary\n+ self.assertEqual(ret, d)\n+ # Check substitutions\n+ cmd = ['some', 'ordinary', 'strings']\n+ self.assertEqual(substfunc(cmd, d), cmd)\n+ cmd = ['@[email protected]', 'ordinary', 'strings']\n+ self.assertEqual(substfunc(cmd, d), [inputs[0] + '.out'] + cmd[1:])\n+ cmd = ['@[email protected]', '@[email protected]', 'strings']\n+ self.assertEqual(substfunc(cmd, d),\n+ [inputs[0] + '.out'] + [d['@PLAINNAME@'] + '.ok'] + cmd[2:])\n+ cmd = ['@INPUT@', '@[email protected]', 'strings']\n+ self.assertEqual(substfunc(cmd, d),\n+ inputs + [d['@BASENAME@'] + '.hah'] + cmd[2:])\n+ cmd = ['@OUTPUT@']\n+ self.assertRaises(ME, substfunc, cmd, d)\n+\n+ # One input, one output\n+ inputs = ['bar/foo.c.in']\n+ outputs = ['out.c']\n+ ret = dictfunc(inputs, outputs)\n+ d = {'@INPUT@': inputs, '@INPUT0@': inputs[0],\n+ '@PLAINNAME@': 'foo.c.in', '@BASENAME@': 'foo.c',\n+ '@OUTPUT@': outputs, '@OUTPUT0@': outputs[0], '@OUTDIR@': '.'}\n+ # Check dictionary\n+ self.assertEqual(ret, d)\n+ # Check substitutions\n+ cmd = ['some', 'ordinary', 'strings']\n+ self.assertEqual(substfunc(cmd, d), cmd)\n+ cmd = ['@[email protected]', '@OUTPUT@', 'strings']\n+ self.assertEqual(substfunc(cmd, d),\n+ [inputs[0] + '.out'] + outputs + cmd[2:])\n+ cmd = ['@[email protected]', '@[email protected]', '@OUTPUT0@']\n+ self.assertEqual(substfunc(cmd, d),\n+ [inputs[0] + '.out', d['@PLAINNAME@'] + '.ok'] + outputs)\n+ cmd = ['@INPUT@', '@[email protected]', 'strings']\n+ self.assertEqual(substfunc(cmd, d),\n+ inputs + [d['@BASENAME@'] + '.hah'] + cmd[2:])\n+\n+ # One input, one output with a subdir\n+ outputs = ['dir/out.c']\n+ ret = dictfunc(inputs, outputs)\n+ d = {'@INPUT@': inputs, '@INPUT0@': inputs[0],\n+ '@PLAINNAME@': 'foo.c.in', '@BASENAME@': 'foo.c',\n+ '@OUTPUT@': outputs, '@OUTPUT0@': outputs[0], '@OUTDIR@': 'dir'}\n+ # Check dictionary\n+ 
self.assertEqual(ret, d)\n+\n+ # Two inputs, no outputs\n+ inputs = ['bar/foo.c.in', 'baz/foo.c.in']\n+ outputs = []\n+ ret = dictfunc(inputs, outputs)\n+ d = {'@INPUT@': inputs, '@INPUT0@': inputs[0], '@INPUT1@': inputs[1]}\n+ # Check dictionary\n+ self.assertEqual(ret, d)\n+ # Check substitutions\n+ cmd = ['some', 'ordinary', 'strings']\n+ self.assertEqual(substfunc(cmd, d), cmd)\n+ cmd = ['@INPUT@', 'ordinary', 'strings']\n+ self.assertEqual(substfunc(cmd, d), inputs + cmd[1:])\n+ cmd = ['@[email protected]', 'ordinary', 'strings']\n+ self.assertEqual(substfunc(cmd, d), [inputs[0] + '.out'] + cmd[1:])\n+ cmd = ['@[email protected]', '@[email protected]', 'strings']\n+ self.assertEqual(substfunc(cmd, d), [inputs[0] + '.out', inputs[1] + '.ok'] + cmd[2:])\n+ cmd = ['@INPUT0@', '@INPUT1@', 'strings']\n+ self.assertEqual(substfunc(cmd, d), inputs + cmd[2:])\n+ # Many inputs, can't use @INPUT@ like this\n+ cmd = ['@[email protected]', 'ordinary', 'strings']\n+ # Not enough inputs\n+ cmd = ['@[email protected]', 'ordinary', 'strings']\n+ self.assertRaises(ME, substfunc, cmd, d)\n+ # Too many inputs\n+ cmd = ['@PLAINNAME@']\n+ self.assertRaises(ME, substfunc, cmd, d)\n+ cmd = ['@BASENAME@']\n+ self.assertRaises(ME, substfunc, cmd, d)\n+ # No outputs\n+ cmd = ['@OUTPUT@']\n+ self.assertRaises(ME, substfunc, cmd, d)\n+ cmd = ['@OUTPUT0@']\n+ self.assertRaises(ME, substfunc, cmd, d)\n+ cmd = ['@OUTDIR@']\n+ self.assertRaises(ME, substfunc, cmd, d)\n+\n+ # Two inputs, one output\n+ outputs = ['dir/out.c']\n+ ret = dictfunc(inputs, outputs)\n+ d = {'@INPUT@': inputs, '@INPUT0@': inputs[0], '@INPUT1@': inputs[1],\n+ '@OUTPUT@': outputs, '@OUTPUT0@': outputs[0], '@OUTDIR@': 'dir'}\n+ # Check dictionary\n+ self.assertEqual(ret, d)\n+ # Check substitutions\n+ cmd = ['some', 'ordinary', 'strings']\n+ self.assertEqual(substfunc(cmd, d), cmd)\n+ cmd = ['@OUTPUT@', 'ordinary', 'strings']\n+ self.assertEqual(substfunc(cmd, d), outputs + cmd[1:])\n+ cmd = ['@[email protected]', 'ordinary', 'strings']\n+ self.assertEqual(substfunc(cmd, d), [outputs[0] + '.out'] + cmd[1:])\n+ cmd = ['@[email protected]', '@[email protected]', 'strings']\n+ self.assertEqual(substfunc(cmd, d), [outputs[0] + '.out', inputs[1] + '.ok'] + cmd[2:])\n+ # Many inputs, can't use @INPUT@ like this\n+ cmd = ['@[email protected]', 'ordinary', 'strings']\n+ # Not enough inputs\n+ cmd = ['@[email protected]', 'ordinary', 'strings']\n+ self.assertRaises(ME, substfunc, cmd, d)\n+ # Not enough outputs\n+ cmd = ['@[email protected]', 'ordinary', 'strings']\n+ self.assertRaises(ME, substfunc, cmd, d)\n+\n+ # Two inputs, two outputs\n+ outputs = ['dir/out.c', 'dir/out2.c']\n+ ret = dictfunc(inputs, outputs)\n+ d = {'@INPUT@': inputs, '@INPUT0@': inputs[0], '@INPUT1@': inputs[1],\n+ '@OUTPUT@': outputs, '@OUTPUT0@': outputs[0], '@OUTPUT1@': outputs[1],\n+ '@OUTDIR@': 'dir'}\n+ # Check dictionary\n+ self.assertEqual(ret, d)\n+ # Check substitutions\n+ cmd = ['some', 'ordinary', 'strings']\n+ self.assertEqual(substfunc(cmd, d), cmd)\n+ cmd = ['@OUTPUT@', 'ordinary', 'strings']\n+ self.assertEqual(substfunc(cmd, d), outputs + cmd[1:])\n+ cmd = ['@OUTPUT0@', '@OUTPUT1@', 'strings']\n+ self.assertEqual(substfunc(cmd, d), outputs + cmd[2:])\n+ cmd = ['@[email protected]', '@[email protected]', '@OUTDIR@']\n+ self.assertEqual(substfunc(cmd, d), [outputs[0] + '.out', inputs[1] + '.ok', 'dir'])\n+ # Many inputs, can't use @INPUT@ like this\n+ cmd = ['@[email protected]', 'ordinary', 'strings']\n+ # Not enough inputs\n+ cmd = ['@[email protected]', 
'ordinary', 'strings']\n+ self.assertRaises(ME, substfunc, cmd, d)\n+ # Not enough outputs\n+ cmd = ['@[email protected]', 'ordinary', 'strings']\n+ self.assertRaises(ME, substfunc, cmd, d)\n+ # Many outputs, can't use @OUTPUT@ like this\n+ cmd = ['@[email protected]', 'ordinary', 'strings']\n+ self.assertRaises(ME, substfunc, cmd, d)\n+\n \n class LinuxlikeTests(unittest.TestCase):\n def setUp(self):\n", "new_path": "run_unittests.py", "old_path": "run_unittests.py" } ]
003e0a0610582020d1b213e0c8d16fe63bc6eabe
mesonbuild/meson
20.02.2017 07:06:13
Apache License 2.0
Use the same function for detection of C and C++ compilers The mechanism is identical which means there's a high likelihood of unintended divergence. In fact, a slight divergence was already there.
[ { "change_type": "MODIFY", "diff": "@@ -400,9 +400,9 @@ class Environment:\n errmsg += '\\nRunning \"{0}\" gave \"{1}\"'.format(c, e)\n raise EnvironmentException(errmsg)\n \n- def detect_c_compiler(self, want_cross):\n+ def _detect_c_or_cpp_compiler(self, lang, evar, want_cross):\n popen_exceptions = {}\n- compilers, ccache, is_cross, exe_wrap = self._get_compilers('c', 'CC', want_cross)\n+ compilers, ccache, is_cross, exe_wrap = self._get_compilers(lang, evar, want_cross)\n for compiler in compilers:\n if isinstance(compiler, str):\n compiler = [compiler]\n@@ -424,24 +424,34 @@ class Environment:\n continue\n gtype = self.get_gnu_compiler_type(defines)\n version = self.get_gnu_version_from_defines(defines)\n- return GnuCCompiler(ccache + compiler, version, gtype, is_cross, exe_wrap, defines)\n+ cls = GnuCCompiler if lang == 'c' else GnuCPPCompiler\n+ return cls(ccache + compiler, version, gtype, is_cross, exe_wrap, defines)\n if 'clang' in out:\n if 'Apple' in out or for_darwin(want_cross, self):\n cltype = CLANG_OSX\n else:\n cltype = CLANG_STANDARD\n- return ClangCCompiler(ccache + compiler, version, cltype, is_cross, exe_wrap)\n+ cls = ClangCCompiler if lang == 'c' else ClangCPPCompiler\n+ return cls(ccache + compiler, version, cltype, is_cross, exe_wrap)\n if 'Microsoft' in out or 'Microsoft' in err:\n # Visual Studio prints version number to stderr but\n # everything else to stdout. Why? Lord only knows.\n version = search_version(err)\n- return VisualStudioCCompiler(compiler, version, is_cross, exe_wrap)\n+ cls = VisualStudioCCompiler if lang == 'c' else VisualStudioCPPCompiler\n+ return cls(compiler, version, is_cross, exe_wrap)\n if '(ICC)' in out:\n # TODO: add microsoft add check OSX\n inteltype = ICC_STANDARD\n- return IntelCCompiler(ccache + compiler, version, inteltype, is_cross, exe_wrap)\n+ cls = IntelCCompiler if lang == 'c' else IntelCPPCompiler\n+ return cls(ccache + compiler, version, inteltype, is_cross, exe_wrap)\n self._handle_compiler_exceptions(popen_exceptions, compilers)\n \n+ def detect_c_compiler(self, want_cross):\n+ return self._detect_c_or_cpp_compiler('c', 'CC', want_cross)\n+\n+ def detect_cpp_compiler(self, want_cross):\n+ return self._detect_c_or_cpp_compiler('cpp', 'CXX', want_cross)\n+\n def detect_fortran_compiler(self, want_cross):\n popen_exceptions = {}\n compilers, ccache, is_cross, exe_wrap = self._get_compilers('fortran', 'FC', want_cross)\n@@ -496,46 +506,6 @@ class Environment:\n path = os.path.split(__file__)[0]\n return os.path.join(path, 'depfixer.py')\n \n- def detect_cpp_compiler(self, want_cross):\n- popen_exceptions = {}\n- compilers, ccache, is_cross, exe_wrap = self._get_compilers('cpp', 'CXX', want_cross)\n- for compiler in compilers:\n- if isinstance(compiler, str):\n- compiler = [compiler]\n- basename = os.path.basename(compiler[-1]).lower()\n- if basename == 'cl' or basename == 'cl.exe':\n- arg = '/?'\n- else:\n- arg = '--version'\n- try:\n- p, out, err = Popen_safe(compiler + [arg])\n- except OSError as e:\n- popen_exceptions[' '.join(compiler + [arg])] = e\n- continue\n- version = search_version(out)\n- if 'Free Software Foundation' in out:\n- defines = self.get_gnu_compiler_defines(compiler)\n- if not defines:\n- popen_exceptions[compiler] = 'no pre-processor defines'\n- continue\n- gtype = self.get_gnu_compiler_type(defines)\n- version = self.get_gnu_version_from_defines(defines)\n- return GnuCPPCompiler(ccache + compiler, version, gtype, is_cross, exe_wrap, defines)\n- if 'clang' in out:\n- if 'Apple' in out:\n- cltype = 
CLANG_OSX\n- else:\n- cltype = CLANG_STANDARD\n- return ClangCPPCompiler(ccache + compiler, version, cltype, is_cross, exe_wrap)\n- if 'Microsoft' in out or 'Microsoft' in err:\n- version = search_version(err)\n- return VisualStudioCPPCompiler(compiler, version, is_cross, exe_wrap)\n- if '(ICC)' in out:\n- # TODO: add microsoft add check OSX\n- inteltype = ICC_STANDARD\n- return IntelCPPCompiler(ccache + compiler, version, inteltype, is_cross, exe_wrap)\n- self._handle_compiler_exceptions(popen_exceptions, compilers)\n-\n def detect_objc_compiler(self, want_cross):\n popen_exceptions = {}\n compilers, ccache, is_cross, exe_wrap = self._get_compilers('objc', 'OBJC', want_cross)\n", "new_path": "mesonbuild/environment.py", "old_path": "mesonbuild/environment.py" } ]
1fbf6300c5d38b12a4347a9327e54a9a315ef8de
mesonbuild/meson
10.04.2017 23:36:06
Apache License 2.0
Use an enum instead of strings for method names. If a non-string value is passed as a method, reject this explicitly with a clear error message rather than trying to match with it and failing.
[ { "change_type": "MODIFY", "diff": "@@ -24,6 +24,7 @@ import sys\n import os, stat, glob, shutil\n import subprocess\n import sysconfig\n+from enum import Enum\n from collections import OrderedDict\n from . mesonlib import MesonException, version_compare, version_compare_many, Popen_safe\n from . import mlog\n@@ -33,21 +34,35 @@ from .environment import detect_cpu_family, for_windows\n class DependencyException(MesonException):\n '''Exceptions raised while trying to find dependencies'''\n \n+class DependencyMethods(Enum):\n+ # Auto means to use whatever dependency checking mechanisms in whatever order meson thinks is best.\n+ AUTO = 'auto'\n+ PKGCONFIG = 'pkg-config'\n+ QMAKE = 'qmake'\n+ # Just specify the standard link arguments, assuming the operating system provides the library.\n+ SYSTEM = 'system'\n+ # Detect using sdl2-config\n+ SDLCONFIG = 'sdlconfig'\n+ # This is only supported on OSX - search the frameworks directory by name.\n+ EXTRAFRAMEWORK = 'extraframework'\n+ # Detect using the sysconfig module.\n+ SYSCONFIG = 'sysconfig'\n+\n class Dependency:\n def __init__(self, type_name, kwargs):\n self.name = \"null\"\n self.is_found = False\n self.type_name = type_name\n- method = kwargs.get('method', 'auto')\n+ method = DependencyMethods(kwargs.get('method', 'auto'))\n \n # Set the detection method. If the method is set to auto, use any available method.\n # If method is set to a specific string, allow only that detection method.\n- if method == \"auto\":\n+ if method == DependencyMethods.AUTO:\n self.methods = self.get_methods()\n elif method in self.get_methods():\n self.methods = [method]\n else:\n- raise MesonException('Unsupported detection method: {}, allowed methods are {}'.format(method, mlog.format_list([\"auto\"] + self.get_methods())))\n+ raise MesonException('Unsupported detection method: {}, allowed methods are {}'.format(method.value, mlog.format_list(map(lambda x: x.value, [DependencyMethods.AUTO] + self.get_methods()))))\n \n def __repr__(self):\n s = '<{0} {1}: {2}>'\n@@ -68,7 +83,7 @@ class Dependency:\n return []\n \n def get_methods(self):\n- return ['auto']\n+ return [DependencyMethods.AUTO]\n \n def get_name(self):\n return self.name\n@@ -268,7 +283,7 @@ class PkgConfigDependency(Dependency):\n return self.libs\n \n def get_methods(self):\n- return ['pkg-config']\n+ return [DependencyMethods.PKGCONFIG]\n \n def check_pkgconfig(self):\n evar = 'PKG_CONFIG'\n@@ -985,10 +1000,10 @@ class QtBaseDependency(Dependency):\n # Keep track of the detection methods used, for logging purposes.\n methods = []\n # Prefer pkg-config, then fallback to `qmake -query`\n- if 'pkg-config' in self.methods:\n+ if DependencyMethods.PKGCONFIG in self.methods:\n self._pkgconfig_detect(mods, env, kwargs)\n methods.append('pkgconfig')\n- if not self.is_found and 'qmake' in self.methods:\n+ if not self.is_found and DependencyMethods.QMAKE in self.methods:\n from_text = self._qmake_detect(mods, env, kwargs)\n methods.append('qmake-' + self.name)\n methods.append('qmake')\n@@ -1137,7 +1152,7 @@ class QtBaseDependency(Dependency):\n return self.largs\n \n def get_methods(self):\n- return ['pkg-config', 'qmake']\n+ return [DependencyMethods.PKGCONFIG, DependencyMethods.QMAKE]\n \n def found(self):\n return self.is_found\n@@ -1301,7 +1316,7 @@ class GLDependency(Dependency):\n self.is_found = False\n self.cargs = []\n self.linkargs = []\n- if 'pkg-config' in self.methods:\n+ if DependencyMethods.PKGCONFIG in self.methods:\n try:\n pcdep = PkgConfigDependency('gl', environment, kwargs)\n if 
pcdep.found():\n@@ -1313,7 +1328,7 @@ class GLDependency(Dependency):\n return\n except Exception:\n pass\n- if 'system' in self.methods:\n+ if DependencyMethods.SYSTEM in self.methods:\n if mesonlib.is_osx():\n self.is_found = True\n self.linkargs = ['-framework', 'OpenGL']\n@@ -1333,9 +1348,9 @@ class GLDependency(Dependency):\n \n def get_methods(self):\n if mesonlib.is_osx() or mesonlib.is_windows():\n- return ['pkg-config', 'system']\n+ return [DependencyMethods.PKGCONFIG, DependencyMethods.SYSTEM]\n else:\n- return ['pkg-config']\n+ return [DependencyMethods.PKGCONFIG]\n \n # There are three different ways of depending on SDL2:\n # sdl2-config, pkg-config and OSX framework\n@@ -1345,7 +1360,7 @@ class SDL2Dependency(Dependency):\n self.is_found = False\n self.cargs = []\n self.linkargs = []\n- if 'pkg-config' in self.methods:\n+ if DependencyMethods.PKGCONFIG in self.methods:\n try:\n pcdep = PkgConfigDependency('sdl2', environment, kwargs)\n if pcdep.found():\n@@ -1358,7 +1373,7 @@ class SDL2Dependency(Dependency):\n except Exception as e:\n mlog.debug('SDL 2 not found via pkgconfig. Trying next, error was:', str(e))\n pass\n- if 'sdlconfig' in self.methods:\n+ if DependencyMethods.SDLCONFIG in self.methods:\n sdlconf = shutil.which('sdl2-config')\n if sdlconf:\n stdo = Popen_safe(['sdl2-config', '--cflags'])[1]\n@@ -1372,7 +1387,7 @@ class SDL2Dependency(Dependency):\n self.version, '(%s)' % sdlconf)\n return\n mlog.debug('Could not find sdl2-config binary, trying next.')\n- if 'extraframework' in self.methods:\n+ if DependencyMethods.EXTRAFRAMEWORK in self.methods:\n if mesonlib.is_osx():\n fwdep = ExtraFrameworkDependency('sdl2', kwargs.get('required', True), None, kwargs)\n if fwdep.found():\n@@ -1397,9 +1412,9 @@ class SDL2Dependency(Dependency):\n \n def get_methods(self):\n if mesonlib.is_osx():\n- return ['pkg-config', 'sdlconfig', 'extraframework']\n+ return [DependencyMethods.PKGCONFIG, DependencyMethods.SDLCONFIG, DependencyMethods.EXTRAFRAMEWORK]\n else:\n- return ['pkg-config', 'sdlconfig']\n+ return [DependencyMethods.PKGCONFIG, DependencyMethods.SDLCONFIG]\n \n class ExtraFrameworkDependency(Dependency):\n def __init__(self, name, required, path, kwargs):\n@@ -1465,7 +1480,7 @@ class Python3Dependency(Dependency):\n self.is_found = False\n # We can only be sure that it is Python 3 at this point\n self.version = '3'\n- if 'pkg-config' in self.methods:\n+ if DependencyMethods.PKGCONFIG in self.methods:\n try:\n pkgdep = PkgConfigDependency('python3', environment, kwargs)\n if pkgdep.found():\n@@ -1477,9 +1492,9 @@ class Python3Dependency(Dependency):\n except Exception:\n pass\n if not self.is_found:\n- if mesonlib.is_windows() and 'sysconfig' in self.methods:\n+ if mesonlib.is_windows() and DependencyMethods.SYSCONFIG in self.methods:\n self._find_libpy3_windows(environment)\n- elif mesonlib.is_osx() and 'extraframework' in self.methods:\n+ elif mesonlib.is_osx() and DependencyMethods.EXTRAFRAMEWORK in self.methods:\n # In OSX the Python 3 framework does not have a version\n # number in its name.\n fw = ExtraFrameworkDependency('python', False, None, kwargs)\n@@ -1536,11 +1551,11 @@ class Python3Dependency(Dependency):\n \n def get_methods(self):\n if mesonlib.is_windows():\n- return ['pkg-config', 'sysconfig']\n+ return [DependencyMethods.PKGCONFIG, DependencyMethods.SYSCONFIG]\n elif mesonlib.is_osx():\n- return ['pkg-config', 'extraframework']\n+ return [DependencyMethods.PKGCONFIG, DependencyMethods.EXTRAFRAMEWORK]\n else:\n- return ['pkg-config']\n+ return 
[DependencyMethods.PKGCONFIG]\n \n def get_version(self):\n return self.version\n@@ -1574,6 +1589,8 @@ def find_external_dependency(name, environment, kwargs):\n required = kwargs.get('required', True)\n if not isinstance(required, bool):\n raise DependencyException('Keyword \"required\" must be a boolean.')\n+ if not isinstance(kwargs.get('method', ''), str):\n+ raise DependencyException('Keyword \"method\" must be a string.')\n lname = name.lower()\n if lname in packages:\n dep = packages[lname](environment, kwargs)\n", "new_path": "mesonbuild/dependencies.py", "old_path": "mesonbuild/dependencies.py" } ]
fab5634916191816ddecf1a2a958fa7ed2eac1ec
mesonbuild/meson
24.06.2017 20:16:30
Apache License 2.0
Add 'Compiler.get_display_language' Use this when we print language-related information to the console and via the Ninja backend.
[ { "change_type": "MODIFY", "diff": "@@ -1606,7 +1606,7 @@ rule FORTRAN_DEP_HACK\n output_args=' '.join(compiler.get_output_args('$out')),\n compile_only_args=' '.join(compiler.get_compile_only_args())\n )\n- description = ' description = Compiling %s object $out.\\n' % langname.title()\n+ description = ' description = Compiling %s object $out.\\n' % compiler.get_display_language()\n if compiler.get_id() == 'msvc':\n deps = ' deps = msvc\\n'\n else:\n", "new_path": "mesonbuild/backend/ninjabackend.py", "old_path": "mesonbuild/backend/ninjabackend.py" }, { "change_type": "MODIFY", "diff": "@@ -179,7 +179,7 @@ class CCompiler(Compiler):\n return ['-Wl,--out-implib=' + implibname]\n \n def sanity_check_impl(self, work_dir, environment, sname, code):\n- mlog.debug('Sanity testing ' + self.language + ' compiler:', ' '.join(self.exelist))\n+ mlog.debug('Sanity testing ' + self.get_display_language() + ' compiler:', ' '.join(self.exelist))\n mlog.debug('Is cross compiler: %s.' % str(self.is_cross))\n \n extra_flags = []\n", "new_path": "mesonbuild/compilers/c.py", "old_path": "mesonbuild/compilers/c.py" }, { "change_type": "MODIFY", "diff": "@@ -584,6 +584,9 @@ class Compiler:\n def get_language(self):\n return self.language\n \n+ def get_display_language(self):\n+ return self.language.capitalize()\n+\n def get_default_suffix(self):\n return self.default_suffix\n \n", "new_path": "mesonbuild/compilers/compilers.py", "old_path": "mesonbuild/compilers/compilers.py" }, { "change_type": "MODIFY", "diff": "@@ -32,6 +32,9 @@ class CPPCompiler(CCompiler):\n self.language = 'cpp'\n CCompiler.__init__(self, exelist, version, is_cross, exe_wrap)\n \n+ def get_display_language(self):\n+ return 'C++'\n+\n def get_no_stdinc_args(self):\n return ['-nostdinc++']\n \n", "new_path": "mesonbuild/compilers/cpp.py", "old_path": "mesonbuild/compilers/cpp.py" }, { "change_type": "MODIFY", "diff": "@@ -25,6 +25,9 @@ class MonoCompiler(Compiler):\n self.id = 'mono'\n self.monorunner = 'mono'\n \n+ def get_display_language(self):\n+ return 'C#'\n+\n def get_output_args(self, fname):\n return ['-out:' + fname]\n \n", "new_path": "mesonbuild/compilers/cs.py", "old_path": "mesonbuild/compilers/cs.py" }, { "change_type": "MODIFY", "diff": "@@ -24,6 +24,9 @@ class ObjCCompiler(CCompiler):\n self.language = 'objc'\n CCompiler.__init__(self, exelist, version, is_cross, exe_wrap)\n \n+ def get_display_language(self):\n+ return 'Objective-C'\n+\n def sanity_check(self, work_dir, environment):\n # TODO try to use sanity_check_impl instead of duplicated code\n source_name = os.path.join(work_dir, 'sanitycheckobjc.m')\n", "new_path": "mesonbuild/compilers/objc.py", "old_path": "mesonbuild/compilers/objc.py" }, { "change_type": "MODIFY", "diff": "@@ -24,6 +24,9 @@ class ObjCPPCompiler(CPPCompiler):\n self.language = 'objcpp'\n CPPCompiler.__init__(self, exelist, version, is_cross, exe_wrap)\n \n+ def get_display_language(self):\n+ return 'Objective-C++'\n+\n def sanity_check(self, work_dir, environment):\n # TODO try to use sanity_check_impl instead of duplicated code\n source_name = os.path.join(work_dir, 'sanitycheckobjcpp.mm')\n", "new_path": "mesonbuild/compilers/objcpp.py", "old_path": "mesonbuild/compilers/objcpp.py" }, { "change_type": "MODIFY", "diff": "@@ -741,7 +741,7 @@ class CompilerHolder(InterpreterObject):\n def unittest_args_method(self, args, kwargs):\n # At time, only D compilers have this feature.\n if not hasattr(self.compiler, 'get_unittest_args'):\n- raise InterpreterException('This {} compiler has no unittest 
arguments.'.format(self.compiler.language))\n+ raise InterpreterException('This {} compiler has no unittest arguments.'.format(self.compiler.get_display_language()))\n return self.compiler.get_unittest_args()\n \n def has_member_method(self, args, kwargs):\n@@ -971,8 +971,7 @@ class CompilerHolder(InterpreterObject):\n raise InvalidCode('Search directory %s is not an absolute path.' % i)\n linkargs = self.compiler.find_library(libname, self.environment, search_dirs)\n if required and not linkargs:\n- l = self.compiler.language.capitalize()\n- raise InterpreterException('{} library {!r} not found'.format(l, libname))\n+ raise InterpreterException('{} library {!r} not found'.format(self.compiler.get_display_language(), libname))\n lib = dependencies.ExternalLibrary(libname, linkargs, self.environment,\n self.compiler.language)\n return ExternalLibraryHolder(lib)\n@@ -986,7 +985,7 @@ class CompilerHolder(InterpreterObject):\n h = mlog.green('YES')\n else:\n h = mlog.red('NO')\n- mlog.log('Compiler for {} supports argument {}:'.format(self.compiler.language, args[0]), h)\n+ mlog.log('Compiler for {} supports argument {}:'.format(self.compiler.get_display_language(), args[0]), h)\n return result\n \n def has_multi_arguments_method(self, args, kwargs):\n@@ -998,7 +997,7 @@ class CompilerHolder(InterpreterObject):\n h = mlog.red('NO')\n mlog.log(\n 'Compiler for {} supports arguments {}:'.format(\n- self.compiler.language, ' '.join(args)),\n+ self.compiler.get_display_language(), ' '.join(args)),\n h)\n return result\n \n@@ -1794,7 +1793,7 @@ class Interpreter(InterpreterBase):\n continue\n else:\n raise\n- mlog.log('Native %s compiler: ' % lang, mlog.bold(' '.join(comp.get_exelist())), ' (%s %s)' % (comp.id, comp.version), sep='')\n+ mlog.log('Native %s compiler: ' % comp.get_display_language(), mlog.bold(' '.join(comp.get_exelist())), ' (%s %s)' % (comp.id, comp.version), sep='')\n if not comp.get_language() in self.coredata.external_args:\n (preproc_args, compile_args, link_args) = environment.get_args_from_envvars(comp)\n self.coredata.external_preprocess_args[comp.get_language()] = preproc_args\n@@ -1802,7 +1801,7 @@ class Interpreter(InterpreterBase):\n self.coredata.external_link_args[comp.get_language()] = link_args\n self.build.add_compiler(comp)\n if need_cross_compiler:\n- mlog.log('Cross %s compiler: ' % lang, mlog.bold(' '.join(cross_comp.get_exelist())), ' (%s %s)' % (cross_comp.id, cross_comp.version), sep='')\n+ mlog.log('Cross %s compiler: ' % cross_comp.get_display_language(), mlog.bold(' '.join(cross_comp.get_exelist())), ' (%s %s)' % (cross_comp.id, cross_comp.version), sep='')\n self.build.add_cross_compiler(cross_comp)\n if self.environment.is_cross_build() and not need_cross_compiler:\n self.build.add_cross_compiler(comp)\n", "new_path": "mesonbuild/interpreter.py", "old_path": "mesonbuild/interpreter.py" } ]
cda0e33650341f0a82c7d4164607fd74805e670f
mesonbuild/meson
18.10.2017 22:39:05
Apache License 2.0
Add ConfigToolDependency class This class is meant abstract away some of the tedium of writing a config tool wrapper dependency, and allow these instances to share some basic code that they all need.
[ { "change_type": "MODIFY", "diff": "@@ -24,7 +24,9 @@ from enum import Enum\n \n from .. import mlog\n from .. import mesonlib\n-from ..mesonlib import MesonException, Popen_safe, version_compare_many, listify\n+from ..mesonlib import (\n+ MesonException, Popen_safe, version_compare_many, version_compare, listify\n+)\n \n \n # These must be defined in this file to avoid cyclical references.\n@@ -55,6 +57,8 @@ class DependencyMethods(Enum):\n EXTRAFRAMEWORK = 'extraframework'\n # Detect using the sysconfig module.\n SYSCONFIG = 'sysconfig'\n+ # Specify using a \"program\"-config style tool\n+ CONFIG_TOOL = 'config-tool'\n \n \n class Dependency:\n@@ -167,6 +171,94 @@ class ExternalDependency(Dependency):\n return self.compiler\n \n \n+class ConfigToolDependency(ExternalDependency):\n+\n+ \"\"\"Class representing dependencies found using a config tool.\"\"\"\n+\n+ tools = None\n+ tool_name = None\n+\n+ def __init__(self, name, environment, language, kwargs):\n+ super().__init__('config-tool', environment, language, kwargs)\n+ self.name = name\n+ self.tools = listify(kwargs.get('tools', self.tools))\n+\n+ req_version = kwargs.get('version', None)\n+ tool, version = self.find_config(req_version)\n+ self.config = tool\n+ self.is_found = self.report_config(version, req_version)\n+ if not self.is_found:\n+ self.config = None\n+ return\n+ self.version = version\n+\n+ def find_config(self, versions=None):\n+ \"\"\"Helper method that searchs for config tool binaries in PATH and\n+ returns the one that best matches the given version requirements.\n+ \"\"\"\n+ if not isinstance(versions, list) and versions is not None:\n+ versions = listify(versions)\n+\n+ best_match = (None, None)\n+ for tool in self.tools:\n+ try:\n+ p, out = Popen_safe([tool, '--version'])[:2]\n+ except (FileNotFoundError, PermissionError):\n+ continue\n+ if p.returncode != 0:\n+ continue\n+\n+ out = out.strip()\n+ # Some tools, like pcap-config don't supply a version, but also\n+ # dont fail with --version, in that case just assume that there is\n+ # only one verison and return it.\n+ if not out:\n+ return (tool, 'none')\n+ if versions:\n+ is_found = version_compare_many(out, versions)[0]\n+ # This allows returning a found version without a config tool,\n+ # which is useful to inform the user that you found version x,\n+ # but y was required.\n+ if not is_found:\n+ tool = None\n+ if best_match[1]:\n+ if version_compare(out, '> {}'.format(best_match[1])):\n+ best_match = (tool, out)\n+ else:\n+ best_match = (tool, out)\n+\n+ return best_match\n+\n+ def report_config(self, version, req_version):\n+ \"\"\"Helper method to print messages about the tool.\"\"\"\n+ if self.config is None:\n+ if version is not None:\n+ mlog.log('found {} {!r} but need:'.format(self.tool_name, version),\n+ req_version)\n+ else:\n+ mlog.log(\"No {} found; can't detect dependency\".format(self.tool_name))\n+ mlog.log('Dependency {} found:'.format(self.name), mlog.red('NO'))\n+ if self.required:\n+ raise DependencyException('Dependency {} not found'.format(self.name))\n+ return False\n+ mlog.log('Found {}:'.format(self.tool_name), mlog.bold(shutil.which(self.config)),\n+ '({})'.format(version))\n+ mlog.log('Dependency {} found:'.format(self.name), mlog.green('YES'))\n+ return True\n+\n+ def get_config_value(self, args, stage):\n+ p, out, _ = Popen_safe([self.config] + args)\n+ if p.returncode != 0:\n+ if self.required:\n+ raise DependencyException('Could not generate {} for {}'.format(\n+ stage, self.name))\n+ return []\n+ return shlex.split(out)\n+\n+ def 
get_methods(self):\n+ return [DependencyMethods.AUTO, DependencyMethods.CONFIG_TOOL]\n+\n+\n class PkgConfigDependency(ExternalDependency):\n # The class's copy of the pkg-config path. Avoids having to search for it\n # multiple times in the same Meson invocation.\n", "new_path": "mesonbuild/dependencies/base.py", "old_path": "mesonbuild/dependencies/base.py" } ]
cf98f5e3705603ae21bef9b0a577bcd001a8c92e
mesonbuild/meson
21.02.2018 13:39:52
Apache License 2.0
Enable searching system crossfile locations on more platforms There's no reason not to also look in these places on Cygwin or OSX. Don't do this on Windows, as these paths aren't meaningful there. Move test_cross_file_system_paths from LinuxlikeTests to AllPlatformTests.
[ { "change_type": "MODIFY", "diff": "@@ -222,17 +222,17 @@ class CoreData:\n (after resolving variables and ~), return that absolute path. Next,\n check if the file is relative to the current source dir. If the path\n still isn't resolved do the following:\n- Linux + BSD:\n+ Windows:\n+ - Error\n+ *:\n - $XDG_DATA_HOME/meson/cross (or ~/.local/share/meson/cross if\n undefined)\n - $XDG_DATA_DIRS/meson/cross (or\n /usr/local/share/meson/cross:/usr/share/meson/cross if undefined)\n - Error\n- *:\n- - Error\n- BSD follows the Linux path and will honor XDG_* if set. This simplifies\n- the implementation somewhat, especially since most BSD users wont set\n- those environment variables.\n+\n+ Non-Windows follows the Linux path and will honor XDG_* if set. This\n+ simplifies the implementation somewhat.\n \"\"\"\n if filename is None:\n return None\n@@ -242,7 +242,7 @@ class CoreData:\n path_to_try = os.path.abspath(filename)\n if os.path.exists(path_to_try):\n return path_to_try\n- if sys.platform == 'linux' or 'bsd' in sys.platform.lower():\n+ if sys.platform != 'win32':\n paths = [\n os.environ.get('XDG_DATA_HOME', os.path.expanduser('~/.local/share')),\n ] + os.environ.get('XDG_DATA_DIRS', '/usr/local/share:/usr/share').split(':')\n", "new_path": "mesonbuild/coredata.py", "old_path": "mesonbuild/coredata.py" }, { "change_type": "MODIFY", "diff": "@@ -1749,6 +1749,53 @@ int main(int argc, char **argv) {\n self._run(ninja,\n workdir=os.path.join(tmpdir, 'builddir'))\n \n+ def test_cross_file_system_paths(self):\n+ if is_windows():\n+ raise unittest.SkipTest('system crossfile paths not defined for Windows (yet)')\n+\n+ testdir = os.path.join(self.common_test_dir, '1 trivial')\n+ cross_content = textwrap.dedent(\"\"\"\\\n+ [binaries]\n+ c = '/usr/bin/cc'\n+ ar = '/usr/bin/ar'\n+ strip = '/usr/bin/ar'\n+\n+ [properties]\n+\n+ [host_machine]\n+ system = 'linux'\n+ cpu_family = 'x86'\n+ cpu = 'i686'\n+ endian = 'little'\n+ \"\"\")\n+\n+ with tempfile.TemporaryDirectory() as d:\n+ dir_ = os.path.join(d, 'meson', 'cross')\n+ os.makedirs(dir_)\n+ with tempfile.NamedTemporaryFile('w', dir=dir_, delete=False) as f:\n+ f.write(cross_content)\n+ name = os.path.basename(f.name)\n+\n+ with mock.patch.dict(os.environ, {'XDG_DATA_HOME': d}):\n+ self.init(testdir, ['--cross-file=' + name], inprocess=True)\n+ self.wipe()\n+\n+ with mock.patch.dict(os.environ, {'XDG_DATA_DIRS': d}):\n+ os.environ.pop('XDG_DATA_HOME', None)\n+ self.init(testdir, ['--cross-file=' + name], inprocess=True)\n+ self.wipe()\n+\n+ with tempfile.TemporaryDirectory() as d:\n+ dir_ = os.path.join(d, '.local', 'share', 'meson', 'cross')\n+ os.makedirs(dir_)\n+ with tempfile.NamedTemporaryFile('w', dir=dir_, delete=False) as f:\n+ f.write(cross_content)\n+ name = os.path.basename(f.name)\n+\n+ with mock.patch('mesonbuild.coredata.os.path.expanduser', lambda x: x.replace('~', d)):\n+ self.init(testdir, ['--cross-file=' + name], inprocess=True)\n+ self.wipe()\n+\n \n class FailureTests(BasePlatformTests):\n '''\n@@ -2546,50 +2593,6 @@ endian = 'little'\n self.init(testdir, ['-Db_lto=true'], default_args=False)\n self.build('reconfigure')\n \n- def test_cross_file_system_paths(self):\n- testdir = os.path.join(self.common_test_dir, '1 trivial')\n- cross_content = textwrap.dedent(\"\"\"\\\n- [binaries]\n- c = '/usr/bin/cc'\n- ar = '/usr/bin/ar'\n- strip = '/usr/bin/ar'\n-\n- [properties]\n-\n- [host_machine]\n- system = 'linux'\n- cpu_family = 'x86'\n- cpu = 'i686'\n- endian = 'little'\n- \"\"\")\n-\n- with tempfile.TemporaryDirectory() as d:\n- 
dir_ = os.path.join(d, 'meson', 'cross')\n- os.makedirs(dir_)\n- with tempfile.NamedTemporaryFile('w', dir=dir_, delete=False) as f:\n- f.write(cross_content)\n- name = os.path.basename(f.name)\n-\n- with mock.patch.dict(os.environ, {'XDG_DATA_HOME': d}):\n- self.init(testdir, ['--cross-file=' + name], inprocess=True)\n- self.wipe()\n-\n- with mock.patch.dict(os.environ, {'XDG_DATA_DIRS': d}):\n- os.environ.pop('XDG_DATA_HOME', None)\n- self.init(testdir, ['--cross-file=' + name], inprocess=True)\n- self.wipe()\n-\n- with tempfile.TemporaryDirectory() as d:\n- dir_ = os.path.join(d, '.local', 'share', 'meson', 'cross')\n- os.makedirs(dir_)\n- with tempfile.NamedTemporaryFile('w', dir=dir_, delete=False) as f:\n- f.write(cross_content)\n- name = os.path.basename(f.name)\n-\n- with mock.patch('mesonbuild.coredata.os.path.expanduser', lambda x: x.replace('~', d)):\n- self.init(testdir, ['--cross-file=' + name], inprocess=True)\n- self.wipe()\n-\n def test_vala_generated_source_buildir_inside_source_tree(self):\n '''\n Test that valac outputs generated C files in the expected location when\n", "new_path": "run_unittests.py", "old_path": "run_unittests.py" } ]
ea3b54d40252fcb87eb1852223f125398b1edbdf
mesonbuild/meson
25.02.2018 15:49:58
Apache License 2.0
Use include_directories for D impdirs. Change the code to store D properties as plain data. Only convert them to compiler flags in the backend. This also means we can fully parse D arguments without needing to know the compiler being used.
[ { "change_type": "MODIFY", "diff": "@@ -2257,6 +2257,9 @@ rule FORTRAN_DEP_HACK\n depelem.write(outfile)\n commands += compiler.get_module_outdir_args(self.get_target_private_dir(target))\n \n+ if compiler.language == 'd':\n+ commands += compiler.get_feature_args(target.d_features, self.build_to_src)\n+\n element = NinjaBuildElement(self.all_outputs, rel_obj, compiler_name, rel_src)\n for d in header_deps:\n if isinstance(d, File):\n", "new_path": "mesonbuild/backend/ninjabackend.py", "old_path": "mesonbuild/backend/ninjabackend.py" }, { "change_type": "MODIFY", "diff": "@@ -355,6 +355,7 @@ class BuildTarget(Target):\n self.extra_args = {}\n self.generated = []\n self.extra_files = []\n+ self.d_features = {}\n # Sources can be:\n # 1. Pre-existing source files in the source tree\n # 2. Pre-existing sources generated by configure_file in the build tree\n@@ -682,12 +683,15 @@ just like those detected with the dependency() function.''')\n dfeature_versions = kwargs.get('d_module_versions', None)\n if dfeature_versions:\n dfeatures['versions'] = dfeature_versions\n- dfeature_import_dirs = kwargs.get('d_import_dirs', None)\n- if dfeature_import_dirs:\n+ if 'd_import_dirs' in kwargs:\n+ dfeature_import_dirs = extract_as_list(kwargs, 'd_import_dirs', unholder=True)\n+ for d in dfeature_import_dirs:\n+ if not isinstance(d, IncludeDirs):\n+ raise InvalidArguments('Arguments to d_import_dirs must be include_directories.')\n dfeatures['import_dirs'] = dfeature_import_dirs\n if dfeatures:\n if 'd' in self.compilers:\n- self.add_compiler_args('d', self.compilers['d'].get_feature_args(dfeatures))\n+ self.d_features = dfeatures\n \n self.link_args = extract_as_list(kwargs, 'link_args')\n for i in self.link_args:\n", "new_path": "mesonbuild/build.py", "old_path": "mesonbuild/build.py" }, { "change_type": "MODIFY", "diff": "@@ -93,7 +93,7 @@ class DCompiler(Compiler):\n # FIXME: Make this work for Windows, MacOS and cross-compiling\n return get_gcc_soname_args(GCC_STANDARD, prefix, shlib_name, suffix, path, soversion, is_shared_module)\n \n- def get_feature_args(self, kwargs):\n+ def get_feature_args(self, kwargs, build_to_src):\n res = []\n if 'unittest' in kwargs:\n unittest = kwargs.pop('unittest')\n@@ -122,8 +122,16 @@ class DCompiler(Compiler):\n import_dir_arg = d_feature_args[self.id]['import_dir']\n if not import_dir_arg:\n raise EnvironmentException('D compiler %s does not support the \"string import directories\" feature.' % self.name_string())\n- for d in import_dirs:\n- res.append('{0}{1}'.format(import_dir_arg, d))\n+ for idir_obj in import_dirs:\n+ basedir = idir_obj.get_curdir()\n+ for idir in idir_obj.get_incdirs():\n+ # Avoid superfluous '/.' 
at the end of paths when d is '.'\n+ if idir not in ('', '.'):\n+ expdir = os.path.join(basedir, idir)\n+ else:\n+ expdir = basedir\n+ srctreedir = os.path.join(build_to_src, expdir)\n+ res.append('{0}{1}'.format(import_dir_arg, srctreedir))\n \n if kwargs:\n raise EnvironmentException('Unknown D compiler feature(s) selected: %s' % ', '.join(kwargs.keys()))\n", "new_path": "mesonbuild/compilers/d.py", "old_path": "mesonbuild/compilers/d.py" }, { "change_type": "MODIFY", "diff": "@@ -37,6 +37,7 @@ from pathlib import PurePath\n \n import importlib\n \n+\n def stringifyUserArguments(args):\n if isinstance(args, list):\n return '[%s]' % ', '.join([stringifyUserArguments(x) for x in args])\n@@ -247,7 +248,7 @@ class ConfigurationDataHolder(MutableInterpreterObject, ObjectHolder):\n return val\n \n def get(self, name):\n- return self.held_object.values[name] # (val, desc)\n+ return self.held_object.values[name] # (val, desc)\n \n def keys(self):\n return self.held_object.values.keys()\n@@ -816,7 +817,8 @@ class CompilerHolder(InterpreterObject):\n '''\n if not hasattr(self.compiler, 'get_feature_args'):\n raise InterpreterException('This {} compiler has no feature arguments.'.format(self.compiler.get_display_language()))\n- return self.compiler.get_feature_args({'unittest': 'true'})\n+ build_to_src = os.path.relpath(self.environment.get_source_dir(), self.environment.get_build_dir())\n+ return self.compiler.get_feature_args({'unittest': 'true'}, build_to_src)\n \n def has_member_method(self, args, kwargs):\n if len(args) != 2:\n@@ -1309,6 +1311,7 @@ class MesonMain(InterpreterObject):\n return args[1]\n raise InterpreterException('Unknown cross property: %s.' % propname)\n \n+\n pch_kwargs = set(['c_pch', 'cpp_pch'])\n \n lang_arg_kwargs = set([\n@@ -2847,12 +2850,17 @@ root and issuing %s.\n @permittedKwargs(permitted_kwargs['include_directories'])\n @stringArgs\n def func_include_directories(self, node, args, kwargs):\n+ return self.build_incdir_object(args, kwargs.get('is_system', False))\n+\n+ def build_incdir_object(self, incdir_strings, is_system=False):\n+ if not isinstance(is_system, bool):\n+ raise InvalidArguments('Is_system must be boolean.')\n src_root = self.environment.get_source_dir()\n build_root = self.environment.get_build_dir()\n absbase_src = os.path.join(src_root, self.subdir)\n absbase_build = os.path.join(build_root, self.subdir)\n \n- for a in args:\n+ for a in incdir_strings:\n if a.startswith(src_root):\n raise InvalidArguments('''Tried to form an absolute path to a source dir. You should not do that but use\n relative paths instead.\n@@ -2875,10 +2883,7 @@ different subdirectory.\n absdir_build = os.path.join(absbase_build, a)\n if not os.path.isdir(absdir_src) and not os.path.isdir(absdir_build):\n raise InvalidArguments('Include dir %s does not exist.' 
% a)\n- is_system = kwargs.get('is_system', False)\n- if not isinstance(is_system, bool):\n- raise InvalidArguments('Is_system must be boolean.')\n- i = IncludeDirsHolder(build.IncludeDirs(self.subdir, args, is_system))\n+ i = IncludeDirsHolder(build.IncludeDirs(self.subdir, incdir_strings, is_system))\n return i\n \n @permittedKwargs(permitted_kwargs['add_test_setup'])\n@@ -3106,6 +3111,7 @@ different subdirectory.\n else:\n mlog.debug('Unknown target type:', str(targetholder))\n raise RuntimeError('Unreachable code')\n+ self.kwarg_strings_to_includedirs(kwargs)\n target = targetclass(name, self.subdir, self.subproject, is_cross, sources, objs, self.environment, kwargs)\n if is_cross:\n self.add_cross_stdlib_info(target)\n@@ -3114,6 +3120,23 @@ different subdirectory.\n self.project_args_frozen = True\n return l\n \n+ def kwarg_strings_to_includedirs(self, kwargs):\n+ if 'd_import_dirs' in kwargs:\n+ items = mesonlib.extract_as_list(kwargs, 'd_import_dirs')\n+ cleaned_items = []\n+ for i in items:\n+ if isinstance(i, str):\n+ # BW compatibility. This was permitted so we must support it\n+ # for a few releases so people can transition to \"correct\"\n+ # path declarations.\n+ if i.startswith(self.environment.get_source_dir()):\n+ mlog.warning('''Building a path to the source dir is not supported. Use a relative path instead.\n+This will become a hard error in the future.''')\n+ i = os.path.relpath(i, os.path.join(self.environment.get_source_dir(), self.subdir))\n+ i = self.build_incdir_object([i])\n+ cleaned_items.append(i)\n+ kwargs['d_import_dirs'] = cleaned_items\n+\n def get_used_languages(self, target):\n result = {}\n for i in target.sources:\n@@ -3152,6 +3175,7 @@ different subdirectory.\n if idx >= len(arg_strings):\n raise InterpreterException('Format placeholder @{}@ out of range.'.format(idx))\n return arg_strings[idx]\n+\n return re.sub(r'@(\\d+)@', arg_replace, templ)\n \n # Only permit object extraction from the same subproject\n", "new_path": "mesonbuild/interpreter.py", "old_path": "mesonbuild/interpreter.py" }, { "change_type": "MODIFY", "diff": "@@ -1,8 +1,22 @@\n project('D Features', 'd')\n \n-# directory for data\n+# ONLY FOR BACKWARDS COMPATIBILITY.\n+# DO NOT DO THIS IN NEW CODE!\n+# USE include_directories() INSTEAD OF BUILDING\n+# STRINGS TO PATHS MANUALLY!\n data_dir = join_paths(meson.current_source_dir(), 'data')\n \n+e_plain_bcompat = executable('dapp_menu_bcompat',\n+ 'app.d',\n+ d_import_dirs: [data_dir]\n+)\n+test('dapp_menu_t_fail_bcompat', e_plain_bcompat, should_fail: true)\n+test('dapp_menu_t_bcompat', e_plain_bcompat, args: ['menu'])\n+\n+# directory for data\n+# This is the correct way to do this.\n+data_dir = include_directories('data')\n+\n e_plain = executable('dapp_menu',\n 'app.d',\n d_import_dirs: [data_dir]\n@@ -10,6 +24,7 @@ e_plain = executable('dapp_menu',\n test('dapp_menu_t_fail', e_plain, should_fail: true)\n test('dapp_menu_t', e_plain, args: ['menu'])\n \n+\n # test feature versions and string imports\n e_versions = executable('dapp_versions',\n 'app.d',\n", "new_path": "test cases/d/9 features/meson.build", "old_path": "test cases/d/9 features/meson.build" } ]

🏟️ Long Code Arena (Commit message generation)

This is the benchmark for the Commit message generation task as part of the 🏟️ Long Code Arena benchmark.

The dataset is a manually curated subset of the Python test set from the 🤗 CommitChronicle dataset, tailored for larger commits.

All the repositories are published under permissive licenses (MIT, Apache-2.0, and BSD-3-Clause). The datapoints can be removed upon request.

How-to

from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-cmg", split="test")

Note that all the data we have is considered to be in the test split.

Note: working with git repositories under the repos directory is not supported via 🤗 Datasets. See the Git Repositories section for more details.

About

Overview

In total, there are 163 commits from 34 repositories. For length statistics, refer to the notebook in our repository.

Dataset Structure

The dataset contains two kinds of data: data about each commit (under the commitchronicle-py-long folder) and compressed git repositories (under the repos folder).

Commits

Each example has the following fields:

Field – Description
repo – Commit repository.
hash – Commit hash.
date – Commit date.
license – Commit repository's license.
message – Commit message.
mods – List of file modifications from a commit.

Each file modification has the following fields:

Field – Description
change_type – Type of change to the current file. One of: ADD, COPY, RENAME, DELETE, MODIFY, or UNKNOWN.
old_path – Path to the file before the change (might be empty).
new_path – Path to the file after the change (might be empty).
diff – git diff for the current file.

Data point example:

{'hash': 'b76ed0db81b3123ede5dc5e5f1bddf36336f3722',
 'repo': 'apache/libcloud',
 'date': '05.03.2022 17:52:34',
 'license': 'Apache License 2.0',
 'message': 'Add tests which verify that all OpenStack driver can be instantiated\nwith all the supported auth versions.\nNOTE: Those tests will fail right now due to the regressions being\nintroduced recently which breaks auth for some versions.',
 'mods': [{'change_type': 'MODIFY',
    'new_path': 'libcloud/test/compute/test_openstack.py',
    'old_path': 'libcloud/test/compute/test_openstack.py',
    'diff': '@@ -39,6 +39,7 @@ from libcloud.utils.py3 import u\n<...>'}],
}    
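
To see how these fields fit together in practice, here is a minimal sketch (not part of the official tooling; the helper name and prompt format are illustrative assumptions) that loads the dataset and flattens one commit's modifications into a single diff-like string, with the message field serving as the reference commit message:

from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-cmg", split="test")
example = dataset[0]

def mods_to_text(mods):
    # Illustrative helper: concatenate all file modifications of one commit
    # into a single diff-like string, e.g. to build a prompt for a model.
    parts = []
    for mod in mods:
        path = mod["new_path"] or mod["old_path"]
        parts.append(f"{mod['change_type']} {path}\n{mod['diff']}")
    return "\n\n".join(parts)

print(example["repo"], example["hash"])
print(mods_to_text(example["mods"])[:500])  # truncated preview of the flattened diff
# example["message"] holds the reference commit message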

Git Repositories

The compressed Git repositories for all the commits in this benchmark are stored under the repos directory.

Working with git repositories under the repos directory is not supported directly via 🤗 Datasets. You can use the huggingface_hub package to download the repositories. Sample code is provided below:

import os
import tarfile
from huggingface_hub import list_repo_tree, hf_hub_download


data_dir = "..."  # replace with a path to where you want to store repositories locally

for repo_file in list_repo_tree("JetBrains-Research/lca-commit-message-generation", "repos", repo_type="dataset"):
    file_path = hf_hub_download(
        repo_id="JetBrains-Research/lca-commit-message-generation",
        filename=repo_file.path,
        repo_type="dataset",
        local_dir=data_dir,
    )

    with tarfile.open(file_path, "r:gz") as tar:
        tar.extractall(path=os.path.join(data_dir, "extracted_repos"))

For convenience, we also provide a full list of files in paths.json.

After you download and extract the repositories, you can work with each repository either via Git or via Python libraries like GitPython or PyDriller.
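
For the GitPython route, a minimal sketch could look like the one below. The repository name and commit hash are taken from the data point example above, while the location and naming of the extracted repository folder are assumptions (inspect extracted_repos after unpacking to see the actual layout):

import os
from git import Repo  # GitPython

data_dir = "..."  # same directory as used in the download snippet above

# Assumption: each archive unpacks into its own folder under extracted_repos;
# adjust repo_path to match what the tar files actually contain.
repo_path = os.path.join(data_dir, "extracted_repos", "apache__libcloud")

repo = Repo(repo_path)
commit = repo.commit("b76ed0db81b3123ede5dc5e5f1bddf36336f3722")  # the hash field
parent = commit.parents[0]

print(commit.summary)  # should correspond to the first line of the message field
print([diff.b_path for diff in parent.diff(commit)])  # files touched by the commit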

🏷️ Extra: commit labels

To facilitate further research, we additionally provide the manual labels for all the 858 commits that made it through initial filtering. The final version of the dataset described above consists of commits labeled either 4 or 5.

How-to

from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-cmg", "labels", split="test")

Note that all the data we have is considered to be in the test split.

About

Dataset Structure

Each example has the following fields:

Field – Description
repo – Commit repository.
hash – Commit hash.
date – Commit date.
license – Commit repository's license.
message – Commit message.
label – Label of the current commit as a target for the CMG task.
comment – Comment for the label of the current commit (optional, might be empty).

Labels are on a 1–5 scale, where:

  • 1 – strong no
  • 2 – weak no
  • 3 – unsure
  • 4 – weak yes
  • 5 – strong yes

Data point example:

{'hash': '1559a4c686ddc2947fc3606e1c4279062cc9480f',
 'repo': 'appscale/gts',
 'date': '15.07.2018 21:00:39',
 'license': 'Apache License 2.0',
 'message': 'Add auto_id_policy and logs_path flags\n\nThese changes were introduced in the 1.7.5 SDK.',
 'label': 1,
 'comment': 'no way to know the version'}
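
As mentioned above, the main configuration keeps only commits labeled 4 or 5. A minimal sketch of reproducing that selection from the labels configuration (the threshold comes directly from that description) could look like this:

from datasets import load_dataset

labels = load_dataset("JetBrains-Research/lca-cmg", "labels", split="test")

# Keep only "weak yes" (4) and "strong yes" (5) commits.
accepted = labels.filter(lambda row: row["label"] >= 4)
print(len(labels), "labeled commits ->", len(accepted), "accepted")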

Citing

@article{bogomolov2024long,
  title={Long Code Arena: a Set of Benchmarks for Long-Context Code Models},
  author={Bogomolov, Egor and Eliseeva, Aleksandra and Galimzyanov, Timur and Glukhov, Evgeniy and Shapkin, Anton and Tigina, Maria and Golubev, Yaroslav and Kovrigin, Alexander and van Deursen, Arie and Izadi, Maliheh and Bryksin, Timofey},
  journal={arXiv preprint arXiv:2406.11612},
  year={2024}
}

You can find the paper at arXiv:2406.11612.
