Columns: message (string, lengths 13-484), diff (string, lengths 38-4.63k)
Change default MacOS path to the user extensions directory. Compatible with inkscape>=1.0beta2.
@@ -67,6 +67,10 @@ class MacDefaults(LinuxDefaults): path += os.environ["PATH"].split(os.path.pathsep) return path + @property + def inkscape_extensions_path(self): + return os.path.expanduser("~/Library/Application Support/org.inkscape.Inkscape/config/inkscape/extensions") + class WindowsDefaults(Defaults):
Update DIB doc. bin/disk-image-create was moved in a patch last year, which turned disk-image-create into a regular Python entry point [1]. As a result, our current document has some problems. An effective solution is to install DIB. Updated the relevant information. [1]: https://review.openstack.org/#/c/367157/
@@ -135,7 +135,9 @@ operating systems are not yet fully supported. Receiving objects: 100% (8881/8881), 1.92 MiB | 0 bytes/s, done. Resolving deltas: 100% (4668/4668), done. Checking connectivity... done. - user@machine:/opt/stack$ + user@machine:/opt/stack$ cd diskimage-builder + user@machine:/opt/stack/diskimage-builder$ sudo pip install -r requirements.txt + user@machine:/opt/stack/diskimage-builder$ sudo python setup.py install Ensure that you have qemu-img [2]_ and kpartx installed. @@ -146,7 +148,7 @@ takes the following options: .. code-block:: bash - user@machine:/opt/stack/diskimage-builder$ ./bin/disk-image-create -h + user@machine:/opt/stack/diskimage-builder$ disk-image-create -h Usage: disk-image-create [OPTION]... [ELEMENT]... Options: @@ -229,10 +231,10 @@ This command will create a guest image usable by Trove: export ELEMENTS_PATH+=:$PATH_TRIPLEO_ELEMENTS/elements export DIB_APT_CONF_DIR=/etc/apt/apt.conf.d export DIB_CLOUD_INIT_ETC_HOSTS=true - local QEMU_IMG_OPTIONS=$(! $(qemu-img | grep -q 'version 1') && echo "--qemu-img-options compat=0.10") + local QEMU_IMG_OPTIONS="--qemu-img-options compat=1.1" # run disk-image-create that actually causes the image to be built - ${PATH_DISKIMAGEBUILDER}/bin/disk-image-create -a amd64 -o "${VM}" \ + $disk-image-create -a amd64 -o "${VM}" \ -x ${QEMU_IMG_OPTIONS} ${DISTRO} ${EXTRA_ELEMENTS} vm \ cloud-init-datasources ${DISTRO}-guest ${DISTRO}-${SERVICE_TYPE} @@ -559,10 +561,10 @@ build_vm(). We look at this section of code in detail below. export ELEMENTS_PATH+=:$PATH_TRIPLEO_ELEMENTS/elements export DIB_APT_CONF_DIR=/etc/apt/apt.conf.d export DIB_CLOUD_INIT_ETC_HOSTS=true - local QEMU_IMG_OPTIONS=$(! $(qemu-img | grep -q 'version 1') && echo "--qemu-img-options compat=0.10") + local QEMU_IMG_OPTIONS="--qemu-img-options compat=1.1" # run disk-image-create that actually causes the image to be built - ${PATH_DISKIMAGEBUILDER}/bin/disk-image-create -a amd64 -o "${VM}" \ + $disk-image-create -a amd64 -o "${VM}" \ -x ${QEMU_IMG_OPTIONS} ${DISTRO} ${EXTRA_ELEMENTS} vm \ cloud-init-datasources ${DISTRO}-guest ${DISTRO}-${SERVICE_TYPE}
Remove note about GitHub Actions ignoring "[ci skip]", since it's no longer true. We auto-cancel builds for PRs if they've been replaced with a newer commit, so I don't think we have to be as nudgy about this. [skip ci]
@@ -177,10 +177,6 @@ Other Tips $ git commit --amend -- Unfortunately, GitHub Actions ignores ``[ci skip]`` for a PR, so we recommend - you only push your commits to GitHub when you are ready for the CI to run. - Please do not push a lot of commits for every small WIP changes. - - If your commit makes substantial changes to the documentation but none of those changes include code snippets, then you can use ``[ci skip]``, which will skip all CI except RTD, where the documentation is built.
Handle None return from callGAPI. I now can't remember the situation where callGAPI returned None, but Python throwing a trap didn't help.
@@ -4569,6 +4569,7 @@ def showDriveFileInfo(users): if not drive: continue feed = callGAPI(drive.files(), u'get', fileId=fileId, fields=fields, supportsTeamDrives=True) + if feed: print_json(None, feed) def showDriveFileRevisions(users): @@ -4578,6 +4579,7 @@ def showDriveFileRevisions(users): if not drive: continue feed = callGAPI(drive.revisions(), u'list', fileId=fileId) + if feed: print_json(None, feed) def transferSecCals(users):
mac_system: return False for non-root user Fixes Previously an exception was raised and salt repeatedly tried to load the module. This spammed the console with error messages.
@@ -13,6 +13,8 @@ try: # python 3 except ImportError: # python 2 from pipes import quote as _cmd_quote +import getpass + # Import salt libs import salt.utils from salt.exceptions import CommandExecutionError, SaltInvocationError @@ -28,6 +30,10 @@ def __virtual__(): return (False, 'The mac_system module could not be loaded: ' 'module only works on MacOS systems.') + if getpass.getuser() != 'root': + return False, 'The mac_system module is not useful for non-root users.' + + if not _atrun_enabled(): if not _enable_atrun(): return (False, 'atrun could not be enabled on this system')
[Darts]: Restore Nested Bullets Per issue (_which has been resolved in the linked website issue_), nested bullets were not working in `hints` files. They are now, so this PR restores `hints` for the `Darts` exercise to the nested-bullet state.
## General - This challenge is all about calculating if a point falls _on, in, or outside_ a given circle. -- There are two different ways of calculating if a point falls on, in, or outside a circle: _Stack Overflow_ - [Equation for Testing if a Point is Inside a Circle][point-circle-equation] outlines one method. -_DoubleRoot_ - [Position of a point relative to a circle][point-to-circle] outlines a different one. +- There are two different ways of calculating if a point falls on, in, or outside a circle. + - This _Stack Overflow_ Post: [Equation for Testing if a Point is Inside a Circle][point-circle-equation] outlines one method. + - This _DoubleRoot_ post [Position of a point relative to a circle][point-to-circle] outlines a different one. - This _Math is Fun_ post covers a more general [Distance Between 2 Points][distance-between-two-points] calculation. - Because the dart board is a set of _nested_ circles, the order in which you calculate points could change the answer significantly. You should pay attention to which direction your calculations "move" in.
Add a test helper function to patch multiple attributes with autospecs This helper reduces redundancy/boilerplate by setting default values. It also has the consequence of shortening the length of the invocation, which makes it faster to use and easier to read.
@@ -23,6 +23,15 @@ for logger in logging.Logger.manager.loggerDict.values(): logger.setLevel(logging.CRITICAL) +def autospec(target, *attributes: str, **kwargs) -> unittest.mock._patch: + """Patch multiple `attributes` of a `target` with autospecced mocks and `spec_set` as True.""" + # Caller's kwargs should take priority and overwrite the defaults. + kwargs = {'spec_set': True, 'autospec': True, **kwargs} + attributes = {attribute: unittest.mock.DEFAULT for attribute in attributes} + + return unittest.mock.patch.multiple(target, **attributes, **kwargs) + + class HashableMixin(discord.mixins.EqualityComparable): """ Mixin that provides similar hashing and equality functionality as discord.py's `Hashable` mixin.
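A minimal, self-contained sketch of how this helper could be used in a test; the `Service` class and test below are hypothetical stand-ins, not code from the bot's repository:

```python
import unittest
import unittest.mock


def autospec(target, *attributes: str, **kwargs) -> unittest.mock._patch:
    """Same helper as in the diff above, repeated so the sketch runs standalone."""
    kwargs = {'spec_set': True, 'autospec': True, **kwargs}
    attributes = {attribute: unittest.mock.DEFAULT for attribute in attributes}
    return unittest.mock.patch.multiple(target, **attributes, **kwargs)


class Service:
    """Hypothetical class under test."""
    def fetch(self, key):
        raise RuntimeError("would hit the network")


class ServiceTests(unittest.TestCase):
    # patch.multiple injects each patched attribute as a keyword argument.
    @autospec(Service, "fetch")
    def test_fetch_is_mocked(self, fetch):
        svc = Service()
        svc.fetch("answer")
        # autospec preserves the original signature, so `self` is recorded too.
        fetch.assert_called_once_with(svc, "answer")


if __name__ == "__main__":
    unittest.main()
```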
Updates player variables instantly. Updates all player variables in the text UI instantly. Consolidates the calls to update player, ball, and score into the aggregate player_vars handler.
@@ -117,9 +117,6 @@ class TextUi(MpfController): self._bcp_connected) self.machine.events.add_handler('shutdown', self.stop) self.machine.add_crash_handler(self.stop) - self.machine.events.add_handler('player_number', self._update_player) - self.machine.events.add_handler('player_ball', self._update_player) - self.machine.events.add_handler('player_score', self._update_player) self.machine.events.add_handler('ball_ended', self._update_player) @@ -147,6 +144,9 @@ class TextUi(MpfController): self.machine.events.add_handler("mode_{}_started".format(mode.name), self._mode_change) self.machine.events.add_handler("mode_{}_stopped".format(mode.name), self._mode_change) + for player_var in self.machine.config['player_vars']: + self.machine.events.add_handler("player_{}".format(player_var), self._update_player) + self.machine.switch_controller.add_monitor(self._update_switches) self.machine.register_monitor("machine_vars", self._update_machine_vars) self.machine.variables.machine_var_monitor = True
Cannot serialize dictionaries with `Element` keys [...] This is not an ideal fix, since it does a round-trip Element->str->Element, but works for now.
@@ -366,7 +366,7 @@ class PhaseDiagram(MSONable): self.all_entries = computed_data["all_entries"] self.qhull_data = computed_data["qhull_data"] self.dim = computed_data["dim"] - self.el_refs = computed_data["el_refs"] + self.el_refs = {Element(el): ref for el, ref in computed_data["el_refs"].items()} self.qhull_entries = computed_data["qhull_entries"] self.stable_entries = set(self.qhull_entries[i] for i in set(itertools.chain(*self.facets))) @@ -461,7 +461,7 @@ class PhaseDiagram(MSONable): all_entries=all_entries, qhull_data=qhull_data, dim=dim, - el_refs=el_refs, + el_refs={str(el): ref for el, ref in el_refs.items()}, qhull_entries=qhull_entries, )
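A small illustration of the Element -> str -> Element round trip the message refers to; this assumes pymatgen is installed, and the dict values are placeholders standing in for real entries:

```python
from pymatgen.core.periodic_table import Element

el_refs = {Element("Fe"): "entry-for-Fe", Element("O"): "entry-for-O"}

# Serialize: Element keys become plain strings ("Fe", "O"), which JSON can handle.
serialized = {str(el): ref for el, ref in el_refs.items()}

# Deserialize: rebuild Element keys from the strings.
restored = {Element(el): ref for el, ref in serialized.items()}

assert restored == el_refs  # the round trip is lossless for the keys
```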
[Isolate] Serve discovery document over webapp2 routes endpoints_webapp2.api_routes installs just the service's handlers. endpoints_webapp2.api_server also installs discovery handlers.
@@ -551,4 +551,7 @@ class IsolateService(remote.Service): def get_routes(): - return endpoints_webapp2.api_routes(IsolateService) + return endpoints_webapp2.api_server([ + config.ConfigApi, + IsolateService, + ])
Update mkvtomp4.py: improve the .original renaming scheme and adjust the filter if renaming the original.
@@ -960,11 +960,19 @@ class MkvtoMp4: og = inputfile + ".original" i = 2 while os.path.isfile(og): - og = og + "2" + og = "%s.%d.original" % (inputfile, i) i += 1 os.rename(inputfile, og) + if self.settings.burn_subtitles: + try: + if os.path.basename(inputfile) in options['video'].get('filter', ""): + self.log.debug("Renaming inputfile in burnsubtitles filter if its present [burn-subtitles].") + options['video']['filter'] = options['video']['filter'].replace(os.path.basename(inputfile), os.path.basename(og)) + except: + self.log.exception("Error trying to rename filter [burn-subtitles].") inputfile = og self.log.debug("Renamed original file to %s." % inputfile) + except: i = 2 while os.path.isfile(finaloutputfile):
Sync tests: test Sync cog's on_guild_role_delete listener A DELETE request should be sent.
@@ -168,3 +168,10 @@ class SyncCogListenerTests(SyncCogTestCase): asyncio.run(self.cog.on_guild_role_create(role)) self.bot.api_client.post.assert_called_once_with("bot/roles", json=role_data) + + def test_sync_cog_on_guild_role_delete(self): + """A DELETE request should be sent.""" + role = helpers.MockRole(id=99) + asyncio.run(self.cog.on_guild_role_delete(role)) + + self.bot.api_client.delete.assert_called_once_with("bot/roles/99")
Added date support for the top machine names routes. What: added support for dates in top machine names for honeypot/network events, and also removed the TODOs from the file as this is now implemented.
@@ -891,6 +891,10 @@ def top_network_machine_names(): Returns: JSON/Dict top network machine names in network events """ + date = fix_date( + get_value_from_request("date") + ) + top_machinenames_query = [ top_machine_names_groupby, sort_by_count_and_id, @@ -905,6 +909,16 @@ def top_network_machine_names(): ) } ] + if date: + match_by_date = { + "$match": { + "date": { + "$gte": date[0], + "$lte": date[1] + } + } + } + top_machinenames_query.insert(0, match_by_date) try: return jsonify( aggregate_function( @@ -924,6 +938,9 @@ def top_honeypot_machine_names(): Returns: JSON/Dict top honeypot machine names """ + date = fix_date( + get_value_from_request("date") + ) top_machinenames_query = [ top_machine_names_groupby, sort_by_count_and_id, @@ -938,6 +955,16 @@ def top_honeypot_machine_names(): ) } ] + if date: + match_by_date = { + "$match": { + "date": { + "$gte": date[0], + "$lte": date[1] + } + } + } + top_machinenames_query.insert(0, match_by_date) try: return jsonify( aggregate_function( @@ -949,10 +976,6 @@ def top_honeypot_machine_names(): return flask_null_array_response() -# todo: combine api calls with date support -# todo: rename API calls from top_ten - - def start_api_server(): """ start API server
[tests] generic modules tests don't work on Travis. Maybe this fixes it...
@@ -12,10 +12,6 @@ from bumblebee.config import Config class TestGenericModules(unittest.TestCase): def setUp(self): - self.popen = mocks.MockPopen() - self.popen.mock.communicate.return_value = (str.encode("1"), "error") - self.popen.mock.returncode = 0 - engine = mock.Mock() engine.input = mock.Mock() config = Config() @@ -26,10 +22,11 @@ class TestGenericModules(unittest.TestCase): for widget in self.objects[mod["name"]].widgets(): self.assertEquals(widget.get("variable", None), None) - def tearDown(self): - self.popen.cleanup() - def test_widgets(self): + popen = mocks.MockPopen() + popen.mock.communicate.return_value = (str.encode("1"), "error") + popen.mock.returncode = 0 + for mod in self.objects: widgets = self.objects[mod].widgets() for widget in widgets: @@ -40,12 +37,17 @@ class TestGenericModules(unittest.TestCase): widget.set("variable", "value") self.assertEquals(widget.get("variable", None), "value") self.assertTrue(isinstance(widget.full_text(), str) or isinstance(widget.full_text(), unicode)) + popen.cleanup() def test_update(self): + popen = mocks.MockPopen() + popen.mock.communicate.return_value = (str.encode("1"), "error") + popen.mock.returncode = 0 for mod in self.objects: widgets = self.objects[mod].widgets() self.objects[mod].update(widgets) self.test_widgets() self.assertEquals(widgets, self.objects[mod].widgets()) + popen.cleanup() # vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4
REPO Table Update Header and inline Comments. Added note about the changes updating the REPO table as well as the REPO_INFO table.
@@ -8,6 +8,14 @@ from sqlalchemy import MetaData from sqlalchemy.ext.automap import automap_base from workers.worker_base import Worker +# NOTE: This worker primarily inserts rows into the REPO_INFO table, which serves the primary purposes of +# 1. Displaying discrete metadata like "number of forks" and how they change over time +# 2. Validating other workers, like those related to pull requests, issues, and commits. Our totals should be at or very near the totals in the repo_info table. + +# This table also updates teh REPO table in 2 cases: +# 1. Recognizing when a repository is a forked repository by updating the "forked_from" field and +# 2. Recognizing when a repository is archived, and recording the data we observed the change in status. + class RepoInfoWorker(Worker): def __init__(self, config): @@ -137,6 +145,7 @@ class RepoInfoWorker(Worker): # Get committers count info that requires seperate endpoint committers_count = self.query_committers_count(owner, repo) + # Note that the addition of information about where a repository may be forked from, and whether a repository is archived, updates the `repo` table, not the `repo_info` table. forked = self.is_forked(owner, repo) archived = self.is_archived(owner, repo) if archived is not False:
Downscale stroke limit by sqrt(abs(det)). Fixes hogwarts error.
-from math import ceil, floor +from math import ceil, floor, sqrt import wx from PIL import Image @@ -16,7 +16,7 @@ from ..svgelements import ( QuadraticBezier, CubicBezier, Arc, - Matrix, + Matrix, Length, ) from .zmatrix import ZMatrix @@ -155,12 +155,15 @@ class LaserRender: else: gc.SetBrush(wx.TRANSPARENT_BRUSH) - def set_element_pen(self, gc, element, zoomscale=1.0): + def set_element_pen(self, gc, element, zoomscale=1.0, width_scale=None): try: - sw = element.stroke_width if element.stroke_width is not None else 1.0 + sw = element.stroke_width except AttributeError: sw = 1.0 - limit = zoomscale ** 0.5 + if sw is None: + sw = 1.0 + limit = zoomscale**.5 + limit /= width_scale if sw < limit: sw = limit self.set_pen(gc, element.stroke, width=sw) @@ -175,14 +178,16 @@ class LaserRender: """Default draw routine for the shape element.""" try: matrix = element.transform + width_scale = sqrt(abs(matrix.determinant)) except AttributeError: matrix = Matrix() + width_scale = 1.0 if not hasattr(element, "cache") or element.cache is None: cache = self.make_path(gc, Path(element)) element.cache = cache gc.PushState() gc.ConcatTransform(wx.GraphicsContext.CreateMatrix(gc, ZMatrix(matrix))) - self.set_element_pen(gc, element, zoomscale=zoomscale) + self.set_element_pen(gc, element, zoomscale=zoomscale, width_scale=width_scale) self.set_element_brush(gc, element) if draw_mode & DRAW_MODE_FILLS == 0 and element.fill is not None: gc.FillPath(element.cache) @@ -194,14 +199,16 @@ class LaserRender: """Default draw routine for the laser path element.""" try: matrix = element.transform + width_scale = sqrt(abs(matrix.determinant)) except AttributeError: matrix = Matrix() + width_scale = 1.0 if not hasattr(element, "cache") or element.cache is None: cache = self.make_path(gc, element) element.cache = cache gc.PushState() gc.ConcatTransform(wx.GraphicsContext.CreateMatrix(gc, ZMatrix(matrix))) - self.set_element_pen(gc, element, zoomscale=zoomscale) + self.set_element_pen(gc, element, zoomscale=zoomscale, width_scale=width_scale) self.set_element_brush(gc, element) if draw_mode & DRAW_MODE_FILLS == 0 and element.fill is not None: gc.FillPath(element.cache) @@ -212,8 +219,10 @@ class LaserRender: def draw_text(self, element, gc, draw_mode, zoomscale=1.0): try: matrix = element.transform + width_scale = sqrt(abs(matrix.determinant)) except AttributeError: matrix = Matrix() + width_scale = 1.0 if hasattr(element, "wxfont"): font = element.wxfont else: @@ -228,7 +237,7 @@ class LaserRender: gc.PushState() gc.ConcatTransform(wx.GraphicsContext.CreateMatrix(gc, ZMatrix(matrix))) - self.set_element_pen(gc, element, zoomscale=zoomscale) + self.set_element_pen(gc, element, zoomscale=zoomscale, width_scale=width_scale) self.set_element_brush(gc, element) if element.fill is None or element.fill == "none":
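The scale factor introduced above can be shown in isolation; this is just the arithmetic, with a hypothetical transform:

```python
# For an affine transform with linear part [[a, c], [b, d]], sqrt(|det|) is the
# geometric-mean length scaling, so dividing the minimum stroke-width limit by it
# keeps hairline strokes visible regardless of how the element itself is scaled.
from math import sqrt

a, b, c, d = 2.0, 0.0, 0.0, 0.5          # hypothetical: 2x in x, 0.5x in y
width_scale = sqrt(abs(a * d - b * c))   # == 1.0, area is preserved overall

zoomscale = 4.0
limit = zoomscale ** 0.5 / width_scale   # same formula as set_element_pen above
print(limit)                             # 2.0
```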
Update fuel_tariff.py: add missing copyright.
+# ********************************************************************************* +# REopt, Copyright (c) 2019-2020, Alliance for Sustainable Energy, LLC. +# All rights reserved. +# +# Redistribution and use in source and binary forms, with or without modification, +# are permitted provided that the following conditions are met: +# +# Redistributions of source code must retain the above copyright notice, this list +# of conditions and the following disclaimer. +# +# Redistributions in binary form must reproduce the above copyright notice, this +# list of conditions and the following disclaimer in the documentation and/or other +# materials provided with the distribution. +# +# Neither the name of the copyright holder nor the names of its contributors may be +# used to endorse or promote products derived from this software without specific +# prior written permission. +# +# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND +# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED +# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. +# IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, +# INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, +# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF +# LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE +# OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED +# OF THE POSSIBILITY OF SUCH DAMAGE. +# ********************************************************************************* class FuelTariff(object): """ Contains information relevant to construct a fuel tariff
Optimization: Do not merge conditional statements with only one exiting branch * When a conditional statement has a branch that does not exit, and one that does, a useless branch merge was conducted. * This should also help scalability by avoiding useless branch merges.
@@ -644,6 +644,9 @@ branches.""", self.clearChild("no_branch") no_branch = None + # Do we need to merge branches + needs_merge = True + # Continue to execute for yes branch unless we know it's not going to be # relevant. if yes_branch is not None: @@ -654,8 +657,11 @@ branches.""", yes_branch = branch_yes_collection.computeBranch(branch=yes_branch) # If it's aborting, it doesn't contribute to merging. - if yes_branch is None or yes_branch.isStatementAborting(): + if yes_branch is None: + branch_yes_collection = None + elif yes_branch.isStatementAborting(): branch_yes_collection = None + needs_merge = False else: branch_yes_collection = None @@ -668,8 +674,11 @@ branches.""", no_branch = branch_no_collection.computeBranch(branch=no_branch) # If it's aborting, it doesn't contribute to merging. - if no_branch is None or no_branch.isStatementAborting(): + if no_branch is None: branch_no_collection = None + elif no_branch.isStatementAborting(): + branch_no_collection = None + needs_merge = False else: branch_no_collection = None @@ -680,7 +689,15 @@ branches.""", if branch_no_collection is not None: trace_collection.replaceBranch(branch_no_collection) else: - trace_collection.mergeBranches(branch_yes_collection, branch_no_collection) + if needs_merge: + trace_collection.mergeBranches( + branch_yes_collection, branch_no_collection + ) + else: + if branch_yes_collection is not None: + trace_collection.replaceBranch(branch_yes_collection) + elif branch_no_collection is not None: + trace_collection.replaceBranch(branch_no_collection) # Both branches may have become empty, which case, the statement needs # not remain.
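A hypothetical snippet of user code showing the case this optimization targets: one branch aborts, so only the other branch's state can flow past the conditional and no merge is needed.

```python
# Hypothetical user code, not part of Nuitka itself.
def lookup(mapping, key):
    if key not in mapping:
        # This branch aborts (raises), so it contributes no state to the code
        # after the conditional.
        raise KeyError(key)
    # Only the fall-through state reaches this point, so merging the two
    # branch states would be useless work for the optimizer.
    return mapping[key]
```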
Fix a typo in docker_puppet_apply.sh. In change we did a refactoring of the puppet apply commands into a single script. A typo slipped in where we set FACTOR_uuid instead of FACTER_uuid. Closes-Bug:
@@ -65,7 +65,7 @@ outputs: cp -a /tmp/puppet-etc/* /etc/puppet || true fi echo "{\"step\": ${STEP}}" > /etc/puppet/hieradata/docker.json - export FACTOR_uuid=docker + export FACTER_uuid=docker set +e puppet apply $EXTRA_ARGS \ --verbose \
Add new entrypoint 'experimental' for capsules. According to the comment in , introducing the new API endpoint for capsules will reduce the API burden when making changes.
@@ -147,6 +147,7 @@ function upload_sandbox_image { function create_zun_accounts { create_service_user "zun" "admin" + create_service_user "zun-experimental" "admin" if is_service_enabled zun-api; then @@ -159,11 +160,18 @@ function create_zun_accounts { local zun_service=$(get_or_create_service "zun" \ "container" "Container As Service") + local zun_experimental_service=$(get_or_create_service "zun-experimental" \ + "container-experimental" "Container As Service - Experimental") get_or_create_endpoint $zun_service \ "$REGION_NAME" \ "$zun_api_url/v1" \ "$zun_api_url/v1" \ "$zun_api_url/v1" + get_or_create_endpoint $zun_experimental_service \ + "$REGION_NAME" \ + "$zun_api_url/experimental" \ + "$zun_api_url/experimental" \ + "$zun_api_url/experimental" fi }
DOC: fixed docstring bug. Added an attribute that was removed by mistake.
@@ -2424,7 +2424,7 @@ class Instrument(object): # Case is retained within inst.meta, though data access to meta is # case insensitive - print('True meta variable name is ', inst.meta['pysat_uts'].) + print('True meta variable name is ', inst.meta['pysat_uts'].name) # Note that the labels in meta may be used when creating a file. # Thus, 'Pysat_UTS' would be found in the resulting file
Update TESTS.md. Just a stray parenthesis in there. nbd.
@@ -14,7 +14,8 @@ Continue reading below for plugin installation. We also recommend [pylint](https://pylint.pycqa.org/en/latest/user_guide/), as it is part of our automated feedback on the website, and can be a very useful (if noisy!) code analysis tool. Pylint can be a bit much, so this [tutorial](https://pylint.pycqa.org/en/latest/tutorial.html) can be helpful for getting started, as can this overview of [Code Quality: Tools and Best Practices](https://realpython.com/python-code-quality/) from Real Python. -Finally, [this site])(https://pycodequ.al/docs/pylint-messages.html) is a great place to look up more human-readable descriptions of Pylint linting messages. + +Finally, [this site](https://pycodequ.al/docs/pylint-messages.html) is a great place to look up more human-readable descriptions of Pylint linting messages. ### Installing `pytest`
qt tx dialog: use WaitingDialog for network requests in __init__ follow
@@ -233,7 +233,11 @@ class BaseTxDialog(QDialog, MessageBoxMixin): # As a result, e.g. we might learn an imported address tx is segwit, # or that a beyond-gap-limit address is is_mine. # note: this might fetch prev txs over the network. - tx.add_info_from_wallet(self.wallet) + BlockingWaitingDialog( + self, + _("Adding info to tx, from wallet and network..."), + lambda: tx.add_info_from_wallet(self.wallet), + ) def do_broadcast(self): self.main_window.push_top_level_window(self)
Updating broken tutorial links: fixed a broken tutorial link and added a NASA-NEX data plotting example.
@@ -40,7 +40,11 @@ Resources: DataAtWork: Tutorials: - Title: Accessing and plotting NASA-NEX data, from GEOSChem-on-cloud tutorial. - URL: http://cloud-gc.readthedocs.io/en/stable/chapter02_beginner-tutorial/use-s3.html#access-nasa-nex-data-in-s3-optional-but-recommended + URL: https://cloud-gc.readthedocs.io/en/latest/chapter02_beginner-tutorial/use-s3.html#access-nasa-nex-data-in-s3-optional-but-recommended + AuthorName: Jiawei Zhuang + AuthorURL: https://github.com/JiaweiZhuang + - Title: Sample Python code to analyze NASA-NEX data. + URL: https://cloud-gc.readthedocs.io/en/latest/chapter06_appendix/plot_NASANEX.html AuthorName: Jiawei Zhuang AuthorURL: https://github.com/JiaweiZhuang Tools & Applications:
Pagination migrations - data structure modified. Changed the pagination emoji collection from a list to a tuple. This change was suggested since this collection is constant.
@@ -12,7 +12,7 @@ RIGHT_EMOJI = "\u27A1" # [:arrow_right:] LAST_EMOJI = "\u23ED" # [:track_next:] DELETE_EMOJI = "<:trashcan:637136429717389331>" # [:trashcan:] -PAGINATION_EMOJI = [FIRST_EMOJI, LEFT_EMOJI, RIGHT_EMOJI, LAST_EMOJI, DELETE_EMOJI] +PAGINATION_EMOJI = (FIRST_EMOJI, LEFT_EMOJI, RIGHT_EMOJI, LAST_EMOJI, DELETE_EMOJI) log = logging.getLogger(__name__)
Re-adding newline. Adding the newline back with proper format. Requires a backslash in front.
@@ -7,7 +7,8 @@ type: Anomaly datamodel: [] description: The following hunting analytic leverages Kerberos Event 4769, A Kerberos service ticket was requested, to identify a potential kerberoasting attack against Active Directory networks. Kerberoasting allows an adversary to request kerberos tickets for domain accounts typically used as service accounts and - attempt to crack them offline allowing them to obtain privileged access to the domain. + attempt to crack them offline allowing them to obtain privileged access to the domain.\ + The detection calculates the standard deviation for each host and leverages the 3-sigma statistical rule to identify an unusual number service ticket requests. To customize this analytic, users can try different combinations of the `bucket` span time and the
[fix] ChIP-seq runs only in single-end mode. The argparse argument --single-end was set to always produce "paired=False", such that ChIP-seq only ran in single-end mode.
@@ -106,7 +106,7 @@ def parse_args(defaults={"verbose":None,"configfile":None,"max_jobs":None,"snake dest="paired", action="store_false", help="input data is single-end, not paired-end (default: '%(default)s')", - default=not(defaults["paired"])) + default=defaults["paired"]) optional.add_argument("--bw-binsize", dest="bw_binsize", @@ -145,6 +145,7 @@ def main(): ## defaults defaults = cf.load_configfile(os.path.join(this_script_dir, "defaults.yaml"),False) + print(defaults) ## get command line arguments parser = parse_args(defaults) args = parser.parse_args()
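A standalone sketch of why the old default was wrong (the flag and default values here are illustrative): with action="store_false", the stored default should be the untouched config value, and negating it flips the pipeline's mode whenever the flag is omitted.

```python
import argparse


def build_parser(paired_default):
    parser = argparse.ArgumentParser()
    parser.add_argument("--single-end", dest="paired", action="store_false",
                        default=paired_default)
    return parser


# Correct behaviour: default taken straight from the config.
print(build_parser(True).parse_args([]).paired)                 # True  -> paired-end
print(build_parser(True).parse_args(["--single-end"]).paired)   # False -> single-end

# The old code passed `not defaults["paired"]`, i.e. False here, so the workflow
# ran single-end even when the flag was never given.
print(build_parser(not True).parse_args([]).paired)             # False
```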
Raise a Property_Error for all predicate calls on null nodes. The semantics of calling a property on a node is to raise a Property_Error when the node is null, whether or not the property is dispatching. TN:
Node : constant ${formal_node_types[0].name} := ${formal_node_types[0].name} (Node_0.El); begin - ## If prop is dispatching, check that Node, on which the property call - ## below can dispatch, is ## null first so we have a chance to raise a - ## proper Property_Error. - % if prop.dispatching and not ctx.no_property_checks: + ## Check that Node is null first so we have a chance to raise a proper + ## Property_Error. + % if not ctx.no_property_checks: if Node = null then raise Property_Error with "predicate call on a null node"; end if;
Add are_conjugate function in OperationAnalyzer to compare two operations... Differs from is_conjugate in that an OperationAnalyzer object need not be created
@@ -197,11 +197,23 @@ class OperationAnalyzer(SymmOp): ''' if type(op2) != OperationAnalyzer: opa2 = OperationAnalyzer(op2) - else: opa2 = deepcopy(op2) if opa2.type == self.type and opa2.order == self.order: return True else: return False + else: + if opa2.type == self.type and opa2.order == self.order: + return True + else: + return False + + def are_conjugate(op1, op2) + ''' + Returns whether two operations are conjugate + ''' + if type(op1) != OperationAnalyzer: + opa1 = OperationAnalyzer(op1) + return opa1.is_conjugate(op2) #Test Functionality if __name__ == "__main__":
client: use %s in logging. error_code can be None.
@@ -738,7 +738,7 @@ def _get_luci_context_access_token(local_auth): if error_code or not access_token: logging.error( - 'local_auth: Error %d in retrieving access token: %s', + 'local_auth: Error %s in retrieving access token: %s', error_code, error_message) return None
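A quick standalone illustration of the difference (not taken from the client code): %d cannot format None, so the original message would be lost to a logging error, while %s renders it as the literal text None.

```python
import logging

logging.basicConfig(level=logging.ERROR)

# Works: %s formats any object, including None.
logging.error('local_auth: Error %s in retrieving access token: %s', None, 'oops')

# With %d, logging catches the TypeError internally and prints a
# "--- Logging error ---" traceback instead of the intended message.
logging.error('local_auth: Error %d in retrieving access token: %s', None, 'oops')
```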
Update README.md. Change broken virtualenv link (dead).
@@ -57,7 +57,7 @@ Clone the [source code repository](https://github.com/Sage-Bionetworks/synapsePy #### Install develop branch -Installing the [develop](https://github.com/Sage-Bionetworks/synapsePythonClient/tree/develop) branch can be useful for testing or for access to the latest features, with the acceptance of an increased risk of experiencing bugs. Using [virtualenv](http://www.virtualenv.org/) to create an isolated test environment is a good idea. +Installing the [develop](https://github.com/Sage-Bionetworks/synapsePythonClient/tree/develop) branch can be useful for testing or for access to the latest features, with the acceptance of an increased risk of experiencing bugs. Using [virtualenv](https://virtualenv.pypa.io/) to create an isolated test environment is a good idea. git clone git://github.com/Sage-Bionetworks/synapsePythonClient.git cd synapsePythonClient
Catch errors in case of an unsuccessful API call for new ot-names. This keeps the task running even if the call fails.
@@ -49,9 +49,12 @@ async def update_names(bot: Bot) -> None: seconds_to_sleep = (next_midnight - datetime.utcnow()).seconds + 1 await asyncio.sleep(seconds_to_sleep) + try: channel_0_name, channel_1_name, channel_2_name = await bot.api_client.get( 'bot/off-topic-channel-names', params={'random_items': 3} ) + except bot.api.ResponseCodeError as e: + log.error(f"Failed to get new off topic channel names: code {e.response.status}") channel_0, channel_1, channel_2 = (bot.get_channel(channel_id) for channel_id in CHANNELS) await channel_0.edit(name=f'ot0-{channel_0_name}')
Don't error if the STP name is None. As a later comment says, the name only appears in the first row for each set of STP members.
@@ -35,12 +35,12 @@ class Command(BaseCommand): stp_code = row[1] stp_ons_code = row[3] - stp_name = row[4].strip() + stp_name = row[4] stp, _ = STP.objects.get_or_create(ons_code=stp_ons_code) stp.code = stp_code if stp_name: # The stp_name is only present in the first row that an STP appears! - stp.name = stp_name + stp.name = stp_name.strip() stp.regional_team = rt stp.save()
Change installation instructions for macOS to use pyenv-virtualenv to better isolate from changes to homebrew's python
@@ -25,11 +25,18 @@ Use Homebrew to install OpenSSL: $ brew install openssl -Use Homebrew to install Python 2 (make note of the installed path that is printed after successful installation): +Use Homebrew to install pyenv-virtualenv: .. code-block:: console - $ brew install python + $ brew install pyenv-virtualenv + +After installing pyenv-virtualenv, edit your ~/.bash_profile and add the following lines at the end of the file to activate pyenv in your shell by default: + +.. code-block:: console + + eval "$(pyenv init -)" + eval "$(pyenv virtualenv-init -)" Windows ^^^^^^^ @@ -43,29 +50,35 @@ For Windows, go to Control Panel -> System and Security -> System -> Advanced Sy Create Virtual Environment -------------------------- -Install `virtualenv <https://virtualenv.pypa.io/en/stable/>`_: +A Python Virtual Environment (virtualenv) is an isolated Python environment where you can install packages without modifying the system Python. Using a virtualenv for cumulusci is recommended to avoid issues and conflicts with other applications using your system Python. -.. code-block:: console +macOS +^^^^^ - $ pip2 install virtualenv +.. code-block:: console -Create a virtual environment using the Python executable path, then activate the virtual environment. The final part of the virtualenv path should be "cumulusci" so that it shows in the shell session when the virtual environment is activated. You could change this to something else if you want. + $ pyenv install 2.7.14 + $ pyenv virtualenv 2.7.14 cci + $ # Copy the following line to ~/.bash_profile to automatically activate the virtual environment in all new shells. + $ pyenv activate cci -macOS +Your shell prompt should change once you are in the virtual env to show (cci) at the start of the prompt. You can exit the cci virtualenv with the following command: .. code-block:: console - $ virtualenv --python=/usr/local/opt/python/libexec/bin/python ~/venvs/cumulusci/ - $ # Copy the following line to ~/.bash_profile to automatically activate the virtual environment in all new shells. - $ source ~/venvs/cumulusci/bin/activate + (cci) $ pyenv deactivate + +For more information about pyenv-virtualenv, see the project's README: https://github.com/pyenv/pyenv-virtualenv/blob/master/README.md Windows +^^^^^^^ .. code-block:: powershell - mkdir C:\Python27\venvs\cumulusci\ - virtualenv --python=C:\Python27\python.exe C:\Python27\venvs\cumulusci\ - source C:\Python27\venvs\cumulusci\Scripts\activate + pip2 install virtualenv + mkdir C:\Python27\venvs\cci\ + virtualenv --python=C:\Python27\python.exe C:\Python27\venvs\cci\ + source C:\Python27\venvs\cci\Scripts\activate Install CumulusCI -----------------
Fixes spam fields. Prevents the forward addon from being checked for spam if it's not present.
@@ -1140,7 +1140,10 @@ class AbstractNode(DirtyFieldsMixin, TypedModel, AddonModelMixin, IdentifierMixi def get_spam_fields(self, saved_fields): # Override for SpamOverrideMixin - return self.SPAM_CHECK_FIELDS if self.is_public and 'is_public' in saved_fields else self.SPAM_CHECK_FIELDS.intersection( + check_fields = self.SPAM_CHECK_FIELDS.copy() + if not self.has_addon('forward'): + check_fields.remove('addons_forward_node_settings__url') + return check_fields if self.is_public and 'is_public' in saved_fields else check_fields.intersection( saved_fields) def callback(self, callback, recursive=False, *args, **kwargs):
multi_file_day bug in Files. Fixed a bug in Files that didn't consider whether or not the instrument has multiple files per day before removing 'duplicates'.
@@ -78,8 +78,34 @@ class Files(object): """ def __init__(self, sat, manual_org=False, directory_format=None, - update_files=False, file_format=None, - write_to_disk=True): + update_files=False, file_format=None, write_to_disk=True): + """ Initialization for Files class object + + Parameters + ----------- + sat : pysat._instrument.Instrument + Instrument object + manual_org : boolian + If True, then pysat will look directly in pysat data directory + for data files and will not use default /platform/name/tag + (default=False) + directory_format : string or NoneType + directory naming structure in string format. Variables such as + platform, name, and tag will be filled in as needed using python + string formatting. The default directory structure would be + expressed as '{platform}/{name}/{tag}' (default=None) + update_files : boolean + If True, immediately query filesystem for instrument files and store + (default=False) + file_format : str or NoneType + File naming structure in string format. Variables such as year, + month, and sat_id will be filled in as needed using python string + formatting. The default file format structure is supplied in the + instrument list_files routine. (default=None) + write_to_disk : boolean + If true, the list of Instrument files will be written to disk. + Prevents a rare condition when running multiple pysat processes. + """ # pysat.Instrument object self._sat = weakref.proxy(sat) @@ -144,7 +170,8 @@ class Files(object): """ if not files_info.empty: - if (len(files_info.index.unique()) != len(files_info)): + if(not self._sat.multi_file_day and + len(files_info.index.unique()) != len(files_info)): estr = 'WARNING! Duplicate datetimes in provided file ' estr = '{:s}information.\nKeeping one of each '.format(estr) estr = '{:s}of the duplicates, dropping the rest.'.format(estr)
Describe the Rosenstein method, though not convinced yet about the delay finding method and `min_tsep` :neutral_face:
@@ -23,6 +23,16 @@ def complexity_lyapunov( of the dimensionality of the phase space, and the largest LE value, `L1` is often used to determine the overall predictability of the dynamical system. + Different algorithms: + + - Rosenstein et al.'s (1993) algorithm was designed for calculating LLEs from small datasets. + The time series is first reconstructed using a delay-embedding method, and the closest neighbour + of each vector is computed using the euclidean distance. These two neighbouring points are then + tracked along their distance trajectories for a number of data points. The slope of the line + using a least-squares fit of the mean log trajectory of the distances gives the final LLE. + + Parameters + ---------- signal : Union[list, np.array, pd.Series] The signal (i.e., a time series) in the form of a vector of values. delay : int, None @@ -88,9 +98,6 @@ def complexity_lyapunov( # construct matrix with pairwise distances between vectors in orbit dists = sklearn.metrics.pairwise.euclidean_distances(embedded) - - min_dist = np.zeros(m) - min_dist_indices = np.zeros(m) for i in range(m): # Exclude indices within min_tsep dists[i, max(0, i - min_tsep) : i + min_tsep + 1] = np.inf @@ -128,6 +135,14 @@ def _complexity_lyapunov_delay(signal): """ # not sure if this is better to be in `optim_complexity_delay` or if this is specific # only for lyapunov + + # From `nolds` + # f = np.fft.rfft(data, n * 2 - 1) + # acorr = np.fft.irfft(f * np.conj(f)) + # acorr = np.roll(acorr, n - 1) + # eps = acorr[n - 1] * (1 - 1.0 / np.e) + # lag = 1 + threshold = 1 - 1 / np.e delay = np.where(signal_autocor(signal, method='fft')[0] < threshold)[0][0]
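As a small companion to the description above, the final step of Rosenstein's method is just a least-squares slope; a toy sketch with made-up divergence values:

```python
# Toy sketch of the last step only: the LLE estimate is the least-squares slope
# of the mean log-divergence curve (the distance values below are made up).
import numpy as np

mean_distances = np.array([0.10, 0.15, 0.22, 0.33, 0.49])  # mean neighbour distance per step
times = np.arange(len(mean_distances))                      # in units of the sampling period
slope, _intercept = np.polyfit(times, np.log(mean_distances), 1)
print(slope)  # ~0.40; a positive slope indicates exponential divergence (positive LLE)
```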
swarming: show timestamp in log of local_smoke_test This is to investigate slowness of local_smoke_test.
@@ -1421,7 +1421,10 @@ def main(): # kept. Only the last test case will leak these two directories. sys.argv.remove('--leak') if verbose: - logging.basicConfig(level=logging.INFO) + logging.basicConfig( + format='%(asctime)s %(filename)s:%(lineno)d %(levelname)s %(message)s', + level=logging.INFO) + Test.maxDiff = None else: logging.basicConfig(level=logging.ERROR)
DOC: Fix docstrings for expressions module. * DOC: Fix docstrings for expressions module. Fix the reference to `projection` function, which was a documentation build error. Also, change docstring for `Coalesce` to clean up warnings on indentation and lines between examples. Discovered while building docs via `make info`.
@@ -857,6 +857,8 @@ def binop_inputs(expr): class Coalesce(Expr): """SQL like coalesce. + .. code-block python + coalesce(a, b) = { a if a is not NULL b otherwise @@ -866,10 +868,13 @@ class Coalesce(Expr): -------- >>> coalesce(1, 2) 1 + >>> coalesce(1, None) 1 + >>> coalesce(None, 2) 2 + >>> coalesce(None, None) is None True """ @@ -979,7 +984,7 @@ def drop_field(expr, field, *fields): See Also -------- - :func:`~blaze.expr.expression.projection` + :func:`blaze.expr.expressions.projection` """ to_remove = set((field,)).union(fields) new_fields = []
MAINT: Remove unneeded call to PyUnicode_READY Both PyUnicode_GetLength and PyUnicode_AsUCS4Copy call PyUnicode_READY internally.
@@ -385,9 +385,7 @@ unicodetype_@form@(PyObject *self) PyObject *new; PyObject *ret; - if (PyUnicode_READY(self) < 0) { - return NULL; - } + /* PyUnicode_READY is called by PyUnicode_GetLength */ len = PyUnicode_GetLength(self); ip = PyUnicode_AsUCS4Copy(self); if (ip == NULL) {
Add outputs to ThreatConnect.
@@ -409,6 +409,25 @@ script: - name: confidenceThreshold description: filter on indicators confidence grater then given value(value between 0 to 100) + outputs: + - contextPath: IP.Address + description: Bad IP Address found + - contextPath: IP.Malicious.Vendor + description: For malicious IPs, the vendor that made the decision + - contextPath: DBotScore.Indicator + description: The indicator we tested + - contextPath: DBotScore.Type + description: The type of the indicator + - contextPath: DBotScore.Vendor + description: Vendor used to calculate the score + - contextPath: DBotScore.Score + description: The actual score + - contextPath: IP.Malicious.Description + description: For malicious IPs, the full Description + important: + - contextPath: IP(val.Malicious) + description: Malicious IPs + related: "" description: search for IP indicator - name: url arguments: @@ -425,6 +444,25 @@ script: - name: confidenceThreshold description: filter on indicators confidence grater then given value(value between 0 to 100) + outputs: + - contextPath: URL.Data + description: Bad URLs found + - contextPath: URL.Malicious.Vendor + description: For malicious URLs, the vendor that made the decision + - contextPath: URL.Malicious.Description + description: For malicious URLs, the reason for the vendor to make the decision + - contextPath: DBotScore.Indicator + description: The indicator we tested + - contextPath: DBotScore.Type + description: The type of the indicator + - contextPath: DBotScore.Vendor + description: Vendor used to calculate the score + - contextPath: DBotScore.Score + description: The actual score + important: + - contextPath: URL(val.Malicious) + description: Malicious URLs + related: "" description: search for URL indicator - name: file arguments: @@ -441,6 +479,29 @@ script: - name: confidenceThreshold description: filter on indicators confidence grater then given value(value between 0 to 100) + outputs: + - contextPath: File.MD5 + description: Bad hash found + - contextPath: File.SHA1 + description: Bad hash SHA1 + - contextPath: File.SHA256 + description: Bad hash SHA256 + - contextPath: File.Malicious.Vendor + description: For malicious files, the vendor that made the decision + - contextPath: File.Malicious.Description + description: For malicious files, the reason for the vendor to make the decision + - contextPath: DBotScore.Indicator + description: The indicator we tested + - contextPath: DBotScore.Type + description: The type of the indicator + - contextPath: DBotScore.Vendor + description: Vendor used to calculate the score + - contextPath: DBotScore.Score + description: The actual score + important: + - contextPath: File(val.Malicious) + description: Malicious Files + related: "" description: search for File indicator - name: tc-owners arguments: [] @@ -525,7 +586,7 @@ script: - name: incidentName description: filter by incident name outputs: - - contextPath: threatconnect.incidents + - contextPath: ThreatConnect.incidents description: Incidents from ThreatConnect description: Fetch TC incidents - name: tc-incident-associate-indicator
m1n1.hv: Handle MMIO/PMGR hooks dynamically This makes m1n1 boot all the way on t6000
@@ -983,8 +983,10 @@ class HV(Reloadable): self.iface.set_event_handler(EVENT.MMIOTRACE, self.handle_mmiotrace) self.iface.set_event_handler(EVENT.IRQTRACE, self.handle_irqtrace) - # Map MMIO range as HW by default) - self.add_tracer(range(0x2_00000000, 0x7_00000000), "HW", TraceMode.OFF) + # Map MMIO ranges as HW by default + for r in self.adt["/arm-io"].ranges: + print(f"Mapping MMIO range: {r.parent_addr:#x} .. {r.parent_addr + r.size:#x}") + self.add_tracer(irange(r.parent_addr, r.size), "HW", TraceMode.OFF) hcr = HCR(self.u.mrs(HCR_EL2)) if self.novm: @@ -1063,17 +1065,33 @@ class HV(Reloadable): self.log(f"PMGR R {base:x}+{off:x}:{width} = 0x{data:x} -> 0x{ret:x}") return ret - pmgr0_start, _ = self.adt["/arm-io/pmgr"].get_reg(0) - - pmgr_hooks = (0x23b7001c0, 0x23b700220, 0x23b700270) # UART0 - if self.iodev == IODEV.USB0: - pmgr_hooks += (0x23d280098, 0x23d280088) + atc = "ATC0_USB" elif self.iodev == IODEV.USB1: - pmgr_hooks += (0x23d2800a0, 0x23d280090) + atc = "ATC1_USB" + + hook_devs = ["UART0", atc] + + pmgr = self.adt["/arm-io/pmgr"] + dev_by_name = {dev.name: dev for dev in pmgr.devices} + dev_by_id = {dev.id: dev for dev in pmgr.devices} + + pmgr_hooks = [] + + def hook_pmgr_dev(dev): + ps = pmgr.ps_regs[dev.psreg] + if dev.psidx or dev.psreg: + addr = pmgr.get_reg(ps.reg)[0] + ps.offset + dev.psidx * 8 + pmgr_hooks.append(addr) + for idx in dev.parents: + if idx in dev_by_id: + hook_pmgr_dev(dev_by_id[idx]) + + for name in hook_devs: + dev = dev_by_name[name] + hook_pmgr_dev(dev) - # XNU bug workaround: don't let ATCx_COMMON power down or reset - pmgr_hooks += (0x23b700420, 0x23b700448) + pmgr0_start = pmgr.get_reg(0)[0] for addr in pmgr_hooks: self.map_hook(addr, 4, write=wh, read=rh)
GDB helpers: record the line number associated to the State instances TN:
@@ -21,7 +21,7 @@ class State(object): Holder for the execution state of a property. """ - def __init__(self, prop): + def __init__(self, line_no, prop): self.property = prop """ :type: langkit.gdb.debug_info.Property @@ -43,6 +43,13 @@ class State(object): Stack of expressions that are being evaluated. """ + self.line_no = line_no + """ + :type: int + The line number in the generated source code where execution was when + this state was decoded. + """ + @classmethod def decode(cls, context, frame): """ @@ -62,7 +69,7 @@ class State(object): return None # Create the result, add the property root scope - result = cls(prop) + result = cls(line_no, prop) root_scope_state = ScopeState(result, None, prop) result.scopes.append(root_scope_state)
implement match_local_id Implement MongoDB version of function to look for an existing persistent NameId for a user.
@@ -5,6 +5,8 @@ from pymongo import MongoClient from pymongo.mongo_replica_set_client import MongoReplicaSetClient import pymongo.uri_parser import pymongo.errors +from saml2.saml import NAMEID_FORMAT_PERSISTENT + from saml2.eptid import Eptid from saml2.mdstore import InMemoryMetaData from saml2.mdstore import metadata_modules @@ -163,6 +165,20 @@ class IdentMDB(IdentDB): return item[self.mdb.primary_key] return None + def match_local_id(self, userid, sp_name_qualifier, name_qualifier): + """ + Look for an existing persistent NameID matching userid, + sp_name_qualifier and name_qualifier. + """ + filter = {"name_id.sp_name_qualifier": sp_name_qualifier, + "name_id.name_qualifier": name_qualifier, + "name_id.format": NAMEID_FORMAT_PERSISTENT, + } + res = self.mdb.get(value=userid, **filter) + if not res: + return None + return from_dict(res[0]["name_id"], ONTS, True) + def remove_remote(self, name_id): cnid = to_dict(name_id, MMODS, True) self.mdb.remove(name_id=cnid)
Fix compact bug where bools with fid 0 are written incorrectly. See original Java source:
@@ -399,18 +399,12 @@ class TCompactProtocol(object): self.trans.write(pack('!b', byte)) def write_bool(self, bool): - if self._bool_fid and self._bool_fid > self._last_fid \ - and self._bool_fid - self._last_fid <= 15: - if bool: - ctype = CompactType.TRUE - else: - ctype = CompactType.FALSE + ctype = CompactType.TRUE if bool else CompactType.FALSE + if self._bool_fid is not None: self._write_field_header(ctype, self._bool_fid) + self._bool_fid = None else: - if bool: - self.write_byte(CompactType.TRUE) - else: - self.write_byte(CompactType.FALSE) + self.write_byte(ctype) def write_i16(self, i16): write_varint(self.trans, make_zig_zag(i16, 16))
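The heart of the bug is that a field id of 0 is falsy, so the old guard skipped the field-header path; a minimal standalone illustration:

```python
# Minimal illustration, detached from the Thrift code: field id 0 is falsy,
# so `if self._bool_fid and ...` took the wrong branch for bools with fid 0.
_bool_fid = 0

print(bool(_bool_fid))         # False -> old guard fell through to the bare-byte path
print(_bool_fid is not None)   # True  -> corrected guard still writes the field header
```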
Use delete utility. This was failing because the SQL delete does a trickle-down delete instead of couch's one-by-one delete.
@@ -5,6 +5,7 @@ from django.test import TestCase from corehq.apps.commtrack.tests.util import make_loc from corehq.apps.domain.shortcuts import create_domain +from corehq.apps.locations.tests.util import delete_all_locations from corehq.apps.users.models import CommCareUser, WebUser from corehq.apps.users.management.commands import add_multi_location_property from corehq.util.test_utils import generate_cases @@ -24,8 +25,7 @@ class CCUserLocationAssignmentTest(TestCase): @classmethod def tearDownClass(cls): cls.domain_obj.delete() - for l in [cls.loc1, cls.loc2]: - l.delete() + delete_all_locations() def setUp(self): super(CCUserLocationAssignmentTest, self).setUp() @@ -132,8 +132,7 @@ class WebUserLocationAssignmentTest(TestCase): @classmethod def tearDownClass(cls): cls.domain_obj.delete() - for l in [cls.loc1, cls.loc2]: - l.delete() + delete_all_locations() def setUp(self): super(WebUserLocationAssignmentTest, self).setUp()
setup-certbot: Use set -x. When there's a failure, this can make it much less confusing to figure out.
@@ -44,6 +44,8 @@ if [ -n "$show_help" ]; then usage fi +set -x + CERTBOT_PATH="/usr/local/sbin/certbot-auto" # For reference https://certbot.eff.org/all-instructions/#debian-other-nginx wget -q https://dl.eff.org/certbot-auto -O "$CERTBOT_PATH"
Fixed Python compatibility for the song.from_dict function. Formatted with black.
@@ -175,7 +175,8 @@ def from_dict(cls, data: Dict[str, Any]) -> "Song": ( list_class for list_class in SongList.__subclasses__() - if list(list_class.__match_args__) == list(data["song_list"].keys()) + if list(list_class.__dataclass_fields__.keys()) + == list(data["song_list"].keys()) ) )
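The compatibility point can be shown with a tiny dataclass (the class below is a stand-in, not spotdl's real SongList): `__dataclass_fields__` exists on Python 3.7+, while `__match_args__` is only generated on Python 3.10+.

```python
import sys
from dataclasses import dataclass


@dataclass
class SongList:          # stand-in for the real class; fields are illustrative
    name: str
    urls: list


print(list(SongList.__dataclass_fields__.keys()))    # ['name', 'urls'] on Python 3.7+

if sys.version_info >= (3, 10):
    print(SongList.__match_args__)                    # ('name', 'urls'), 3.10+ only
```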
Update connecting.rst. My suggestion is to add the authentication_source parameter to the authentication example, because it was included in
@@ -18,10 +18,10 @@ provide the :attr:`host` and :attr:`port` arguments to connect('project1', host='192.168.1.35', port=12345) -If the database requires authentication, :attr:`username` and :attr:`password` -arguments should be provided:: +If the database requires authentication, :attr:`username`, :attr:`password` +and :attr:`authentication_source` arguments should be provided:: - connect('project1', username='webapp', password='pwd123') + connect('project1', username='webapp', password='pwd123', authentication_source='admin') URI style connections are also supported -- just supply the URI as the :attr:`host` to
Fixes determining subdomain parameter. This provider automatically appends the domain name to the subdomain. Therefore, it needs to be removed from the parameter in case the name argument contains it.
@@ -102,13 +102,13 @@ class Provider(BaseProvider): } if name: - data['subdomain'] = name + data['subdomain'] = self._subdomain_name(name) self._add_ttl(data) self._add_priority(data) response = self._get('kc2_domain_dns_set', data) - LOGGER.debug('create_record: %s', response) + LOGGER.debug('create_record response: %s', response) return True @@ -121,13 +121,11 @@ class Provider(BaseProvider): query_params['dns_records_load_type'] = rtype if name: - query_params['dns_records_load_subdomain'] = name + query_params['dns_records_load_subdomain'] = self._subdomain_name(name) if content: query_params['dns_records_load_content'] = content - LOGGER.debug('list_records filter params: %s', query_params) - payload = self._get('kc2_domain_dns_get_records', query_params) response = payload['result']['domains'][0] @@ -161,7 +159,7 @@ class Provider(BaseProvider): data['type'] = rtype if name: - data['subdomain'] = name + data['subdomain'] = self._subdomain_name(name) if content: data['content'] = content @@ -196,7 +194,7 @@ class Provider(BaseProvider): # Helpers # url param used as subaction - def _request(self, action='GET', url=None, data=None, query_params=None): + def _request(self, action='GET', url='/', data=None, query_params=None): if data is None: data = {} if query_params is None: @@ -207,7 +205,7 @@ class Provider(BaseProvider): if self.session_id: query_params['sess_id'] = self.session_id - if url: + if url is not '/': query_params['subaction'] = url response = requests.request(action, self.api_endpoint, params=query_params, @@ -233,3 +231,15 @@ class Provider(BaseProvider): def _add_priority(self, data): if self._get_lexicon_option('priority'): data['prio'] = int(self._get_lexicon_option('priority')) + + # Get the subdomain name only for the given name. + # This provider automatically suffixes the name with the domain name. + def _subdomain_name(self, name): + subdomain = self._full_name(name) + domain_suffix = '.' + self.domain + + # Remove domain name since it will be automatically added by the provider + if subdomain.endswith(domain_suffix): + subdomain = subdomain[:len(subdomain)-len(domain_suffix)] + + return subdomain
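The suffix-stripping idea behind the new `_subdomain_name` helper can be illustrated on its own (the domain below is illustrative):

```python
# Standalone sketch of the idea behind _subdomain_name (domain is illustrative).
domain = "example.com"


def subdomain_name(full_name: str) -> str:
    suffix = "." + domain
    if full_name.endswith(suffix):
        return full_name[: -len(suffix)]   # drop ".example.com"
    return full_name


print(subdomain_name("www.example.com"))   # "www" -> the provider re-appends the domain
print(subdomain_name("www"))               # already relative, left untouched
```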
Remove BibTeX from README, for a couple of reasons: it was incorrectly formatted, so GitHub's rendering of it was garbled; and by pointing to the documentation, we only need to keep the information on what to cite in one place, which makes updates easier and inconsistencies impossible.
@@ -48,31 +48,8 @@ either via a `pull request`_ or via an `issue`_. Citing Stingray --------------- -Please cite `Huppenkothen et al. (2019) -<https://ui.adsabs.harvard.edu/abs/2019ApJ...881...39H/abstract>`_ if you find this package useful in your -research. - -The BibTeX entry for the paper is:: - - @ARTICLE{2019ApJ...881...39H, - author = {{Huppenkothen}, Daniela and {Bachetti}, Matteo and {Stevens}, Abigail L. and {Migliari}, Simone and {Balm}, Paul and {Hammad}, Omar and {Khan}, Usman Mahmood and {Mishra}, Himanshu and {Rashid}, Haroon and {Sharma}, Swapnil and {Martinez Ribeiro}, Evandro and {Valles Blanco}, Ricardo}, - title = "{Stingray: A Modern Python Library for Spectral Timing}", - journal = {\apj}, - keywords = {methods: data analysis, methods: statistical, X-rays: binaries, X-rays: general, Astrophysics - Instrumentation and Methods for Astrophysics, Astrophysics - High Energy Astrophysical Phenomena}, - year = 2019, - month = aug, - volume = {881}, - number = {1}, - eid = {39}, - pages = {39}, - doi = {10.3847/1538-4357/ab258d}, -archivePrefix = {arXiv}, - eprint = {1901.07681}, - primaryClass = {astro-ph.IM}, - adsurl = {https://ui.adsabs.harvard.edu/abs/2019ApJ...881...39H}, - adsnote = {Provided by the SAO/NASA Astrophysics Data System} -} - +If you find this package useful in your research, please provide appropriate acknowledgement and citation. +`Our documentation <https://stingray.science/stingray/citing.html>`_ gives further guidance, including links to appropriate papers and convenient BibTeX entries. Contents --------
Minor updates to Workspace objects. Fixes a bug in display(); adds switch names to the rendered HTML of a Switchboard; allows .value access to switchboard values (instead of just ["value"] dictionary-type access).
@@ -219,7 +219,7 @@ class Workspace(object): "templates","css") with open(_os.path.join(cssPath,"dataTable.css")) as f: script = '<style>\n' + str(f.read()) + '</style>' - _display(_HTML(script1 + script)) + _display(_HTML(script)) # Update Mathjax config -- jupyter notebooks already have this so not needed #script = '<script type="text/x-mathjax-config">\n' \ @@ -399,6 +399,8 @@ class Switchboard(_collections.OrderedDict): self.initialPositions)): if typ == "buttons": html = "<fieldset id='%s'>\n" % ID + if name: + html += "<legend>%s: </legend>" % name for k,lbl in enumerate(posLbls): checked = " checked='checked'" if k==ipos else "" html += "<label for='%s-%d'>%s</label>\n" % (ID, k,lbl) @@ -460,6 +462,12 @@ class Switchboard(_collections.OrderedDict): #self.widget.value = content _display(_HTML(content)) #self.widget) + def __getattr__(self, attr): + if attr in self: + return self[attr] + return getattr(self.__dict__,attr) + + class SwitchValue(object):
Better virtualenv setup, correct path for checked out repo
@@ -25,7 +25,7 @@ jobs: uses: actions/checkout@v2 with: repository: splunk/security-content #check out https://github.com/mitre/cti.git, defaults to HEAD - path: "security-content-TEST" + path: "security-content" - uses: actions/setup-python@v2 @@ -60,10 +60,11 @@ jobs: pwd echo "Current directory listing" cd security-content-TEST - echo "Inside of the test directory" + pwd ls -lah rm -rf venv - virtualenv --python=/usr/bin/python3 --clear venv + #virtualenv --python=/usr/bin/python3 --clear venv + python3 -m venv --clear venv source venv/bin/activate sudo python3 -m pip install -q -r requirements.txt
[IMPR] Delegate undefined method calls to requests.Response object Enable the following properties and methods: history, reason, cookies, elapsed, request, ok, is_redirect, is_permanent_redirect, apparent_encoding, links, raise_for_status
@@ -57,6 +57,12 @@ class HttpRequest: self._parsed_uri = None self._data = None + def __getattr__(self, name): + """Delegate undefined method calls to request.Response object.""" + if self.exception and name in ('content', 'status_code'): + return None + return getattr(self.data, name) + @property @deprecated('the `url` attribute', since='20201011', future_warning=True) def uri(self): # pragma: no cover @@ -153,11 +159,6 @@ class HttpRequest: """DEPRECATED. Return the HTTP response status.""" return self.status_code - @property - def status_code(self) -> Optional[int]: - """Return the HTTP response status.""" - return self.data.status_code if not self.exception else None - @property def header_encoding(self): """Return charset given by the response header.""" @@ -235,14 +236,6 @@ class HttpRequest: return self.content.decode( encoding, errors) if not self.exception else None - @property - def content(self) -> bytes: - """Return the response in bytes. - - @note: The behaviour has been changed. - """ - return self.data.content if not self.exception else None - @property def text(self) -> str: """Return the response decoded by the detected encoding."""
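The patch relies on `__getattr__`, which Python only invokes for attributes that normal lookup cannot find, so the explicit `content` and `status_code` properties can be dropped and still resolve through the wrapped object. A minimal sketch of that delegation pattern, using a hypothetical wrapper class rather than pywikibot's actual `HttpRequest`:

```python
import requests

class ResponseWrapper:
    """Illustrative wrapper: unknown attributes fall through to the wrapped Response."""

    def __init__(self, data: requests.Response, exception: Exception = None):
        self.data = data          # underlying requests.Response
        self.exception = exception

    def __getattr__(self, name):
        # Called only for attributes *not* found on the wrapper itself.
        if self.exception and name in ('content', 'status_code'):
            return None
        return getattr(self.data, name)

resp = ResponseWrapper(requests.get('https://www.example.org'))
print(resp.status_code, resp.ok, resp.elapsed)  # all served by the wrapped Response
```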
Stop ignoring Flake8 `First argument of a method should be named 'self'` Already no breakages of this rule in the whole of Wagtail!
@@ -16,9 +16,8 @@ python-tag = py3 # D401: First line should be in imperative mood # E501: Line too long # W503: line break before binary operator (superseded by W504 line break after binary operator) -# N805: First argument of a method should be named 'self' # N806: Variable in function should be lowercase -ignore = D100,D101,D102,D103,D105,D200,D202,D204,D205,D209,D400,D401,E501,W503,N805,N806 +ignore = D100,D101,D102,D103,D105,D200,D202,D204,D205,D209,D400,D401,E501,W503,N806 exclude = wagtail/project_template/* max-line-length = 120
utils/minions: fix localhost logic in CkMinions.connected_ids We don't need loopback addresses in the address list to find any localhost minion.
@@ -627,7 +627,7 @@ class CkMinions(object): if '127.0.0.1' in addrs: # Add in the address of a possible locally-connected minion. addrs.discard('127.0.0.1') - addrs.update(set(salt.utils.network.ip_addrs(include_loopback=include_localhost))) + addrs.update(set(salt.utils.network.ip_addrs(include_loopback=False))) if subset: search = subset for id_ in search: @@ -643,8 +643,6 @@ class CkMinions(object): continue grains = mdata.get('grains', {}) for ipv4 in grains.get('ipv4', []): - if ipv4 == '127.0.0.1' and not include_localhost: - continue if ipv4 in addrs: if show_ipv4: minions.add((id_, ipv4))
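The core of `connected_ids` is an intersection between each minion's `ipv4` grains and the set of locally known addresses; with loopback excluded from that set, the per-minion `127.0.0.1` check becomes unnecessary. A simplified sketch with made-up data (not the Salt implementation):

```python
def connected_ids(minion_grains: dict, local_addrs: set) -> set:
    """Return ids of minions whose ipv4 grains intersect the local address set."""
    matched = set()
    for minion_id, grains in minion_grains.items():
        for ipv4 in grains.get('ipv4', []):
            if ipv4 in local_addrs:
                matched.add(minion_id)
                break
    return matched

local = {'10.0.0.5', '192.168.1.2'}          # loopback deliberately excluded
grains = {'web01': {'ipv4': ['127.0.0.1', '10.0.0.5']},
          'db01': {'ipv4': ['10.0.0.9']}}
assert connected_ids(grains, local) == {'web01'}
```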
Update ext_colorlog.py Use self.Meta.config_section instead of concatenating 'log' with label.
@@ -32,6 +32,9 @@ class ColorLogHandler(LoggingLogHandler): #: The string identifier of the handler. label = "colorlog" + #: The config_section identifiying the config key. + config_section = 'log.colorlog' + #: Color mapping for each log level colors = { 'DEBUG': 'cyan', @@ -64,7 +67,7 @@ class ColorLogHandler(LoggingLogHandler): def _get_console_format(self): format = super(ColorLogHandler, self)._get_console_format() - colorize = self.app.config.get('log.{}'.format(self.Meta.label), + colorize = self.app.config.get(self.Meta.config_section, 'colorize_console_log') if sys.stdout.isatty() or 'CEMENT_TEST' in os.environ: if is_true(colorize): @@ -73,14 +76,14 @@ class ColorLogHandler(LoggingLogHandler): def _get_file_format(self): format = super(ColorLogHandler, self)._get_file_format() - colorize = self.app.config.get('log.{}'.format(self.Meta.label), + colorize = self.app.config.get(self.Meta.config_section, 'colorize_file_log') if is_true(colorize): format = "%(log_color)s" + format return format def _get_console_formatter(self, format): - colorize = self.app.config.get('log.{}'.format(self.Meta.label), + colorize = self.app.config.get(self.Meta.config_section, 'colorize_console_log') if sys.stdout.isatty() or 'CEMENT_TEST' in os.environ: if is_true(colorize): @@ -99,7 +102,7 @@ class ColorLogHandler(LoggingLogHandler): return formatter def _get_file_formatter(self, format): - colorize = self.app.config.get('log.{}'.format(self.Meta.label), + colorize = self.app.config.get(self.Meta.config_section, 'colorize_file_log') if is_true(colorize): formatter = self._meta.formatter_class(
always check from Monday if holiday in week
@@ -484,16 +484,18 @@ class GarbageCollection(RestoreEntity): first_day = day1 # Check if there are holidays within holiday offset that would fall into day1 if len(self._holidays) > 0 and self._holiday_move_offset > 0: + offset = self._holiday_move_offset + if self._holiday_in_week_move: # If holiday in week, check from Monday + offset = -day1.weekday() check_near = list( filter( - lambda date: date + relativedelta(days=self._holiday_move_offset) - >= day1 + lambda date: date + relativedelta(days=offset) >= day1 and date <= day1, self._holidays, ) ) - if len(check_near) > 0 or bool(self._holiday_in_week_move): - first_day -= relativedelta(days=self._holiday_move_offset) + if len(check_near) > 0: + first_day -= relativedelta(days=offset) while True: try: next_date = await self._async_find_candidate_date(first_day)
Changelog for 0.11.3 Test Plan: None Reviewers: alangenfeld, dgibson
# Changelog +# 0.11.3 + +### Breaking Change + +- Schedules and sensors that target a `pipeline_name` that is not present in the current repository will now error out when the repository is created. + +### New + +- Assets are now included in Dagit global search. The search bar has also been moved to the top of the app. +- [helm] `generatePostgresqlPasswordSecret` toggle was added to allow the Helm chart to reference an external secret containing the Postgresql password (thanks @PenguinToast !) +- [helm] The Dagster Helm chart is now hosted on [Artifact Hub](https://artifacthub.io/packages/search?page=1&org=dagster). +- [helm] The workspace can now be specified under `dagit.workspace`, which can be useful if you are managing your user deployments in a separate Helm release. + +### Bugfixes + +- In Dagit, toggling schedules and sensors on or off will now immediately update the green dot in the left navigation, without requiring a refresh. +- When evaluating `dict` values in `run_config` targeting `Permissive` / `dict` config schemas, the ordering is now preserved. +- Integer values for `EventMetadataEntry.int` greater than 32 bits no longer cause `dagit` errors. +- `PresetDefinition.with_additional_config` no longer errors if the base config was empty (thanks @esztermarton !) +- Fixed limitation on gRPC message size when evaluating run requests for sensors, schedules, and backfills. Previously, a gRPC error would be thrown with status code `StatusCode.RESOURCE_EXHAUSTED` for a large number of run requests, especially when the requested run configs were large. +- Changed backfill job status to reflect the number of successful runs against the number of partitions requested instead of the number of runs requested. Normally these two numbers are the same, but they can differ if a pipeline run initiated by the backfill job is re-executed manually. + +### Documentation + +- Corrections from the community - thanks @mrdavidlaing & @a-cid ! + ## 0.11.2 **Community Contributions**
split _stack into _iter_stack, _format_stack This patch splits Evaluable._stack, which formats an evaluable's stack with intermediate results, into Evaluable._iter_stack which is an iterator over the serialized dependencies formatted with back references, and _format_stack which adds the intermediate results for error message purposes.
@@ -422,12 +422,9 @@ class Evaluable(types.Singleton): log.info('total time: {:.0f}ms\n'.format(tottime/1e6) + '\n'.join('{:4.0f} {} ({} calls, avg {:.3f} per call)'.format(t / 1e6, k, n, t / (1e6*n)) for k, t, n in sorted(aggstats, reverse=True, key=lambda item: item[1]) if n)) - def _stack(self, values): - lines = [' %0 = EVALARGS'] - for (op, indices), v in zip(self.serialized, values): - lines[-1] += ' --> ' + type(v).__name__ - if numeric.isarray(v): - lines[-1] += '({})'.format(','.join(map(str, v.shape))) + def _iter_stack(self): + yield '%0 = EVALARGS' + for i, (op, indices) in enumerate(self.serialized, start=1): try: code = op.evalf.__code__ offset = 1 if getattr(op.evalf, '__self__', None) is not None else 0 @@ -436,8 +433,18 @@ class Evaluable(types.Singleton): args = map(' {}=%{}'.format, names, indices) except: args = map(' %{}'.format, indices) - lines.append(' %{} = {}:{}'.format(len(lines), op, ','.join(args))) - return lines + yield f'%{i} = {op}:{",".join(args)}' + + def _format_stack(self, values, e): + lines = [f'evaluation failed in step {len(values)}/{len(self.dependencies)+1}'] + stack = self._iter_stack() + for v, op in zip(values, stack): # NOTE values must come first to avoid popping next item from stack + s = f'{type(v).__name__}' + if numeric.isarray(v): + s += f'<{v.dtype.kind}:{",".join(str(n) for n in v.shape)}>' + lines.append(f'{op} --> {s}') + lines.append(f'{next(stack)} --> {e}') + return '\n '.join(lines) @property @replace(depthfirst=True, recursive=True) @@ -518,7 +525,7 @@ class Evaluable(types.Singleton): class EvaluationError(Exception): def __init__(self, f, values): - super().__init__('evaluation failed in step {}/{}\n'.format(len(values), len(f.dependencies)) + '\n'.join(f._stack(values))) + super().__init__(f._format_stack(values, 'ERROR')) class EVALARGS(Evaluable):
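The split separates a generator that formats one line per serialized operation from a function that pairs those lines with the intermediate values computed so far. A small self-contained sketch of the same pattern, with invented operation tuples rather than nutils evaluables:

```python
def iter_stack(serialized):
    """Yield one formatted line per serialized operation (illustrative)."""
    yield '%0 = EVALARGS'
    for i, (op, indices) in enumerate(serialized, start=1):
        args = ','.join(f' %{j}' for j in indices)
        yield f'%{i} = {op}:{args}'

def format_stack(serialized, values, error):
    """Pair already-computed values with their stack lines, then report the failure."""
    lines = [f'evaluation failed in step {len(values)}/{len(serialized) + 1}']
    stack = iter_stack(serialized)
    for value, line in zip(values, stack):   # values first, so zip never over-consumes stack
        lines.append(f'{line} --> {type(value).__name__}')
    lines.append(f'{next(stack)} --> {error}')
    return '\n  '.join(lines)

print(format_stack([('Add', (0, 0)), ('Mul', (0, 1))], ['EVALARGS', 3.5], 'ERROR'))
```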
Travis: Update to exercise wa commands Add tests for additional wa commands.
@@ -33,11 +33,16 @@ env: - PEP8="cd $TRAVIS_BUILD_DIR && ./dev_scripts/pep8 wa" - NOSETESTS="nose2 -s $TRAVIS_BUILD_DIR/tests" - WORKLOAD="cd /tmp && wa run $TRAVIS_BUILD_DIR/tests/travis/idle_agenda.yaml -v -d idle_workload" + - PROCESS_CMD="$WORKLOAD && wa process -f -p csv idle_workload" + - SHOW_CMD="wa show dhrystone && wa show generic_android && wa show trace-cmd && wa show csv" + - LIST_CMD="wa list all" + - CREATE_CMD="wa create agenda dhrystone generic_android csv trace_cmd && wa create package test && wa create workload test" matrix: - TEST=$PYLINT - TEST=$PEP8 - TEST=$NOSETESTS - TEST=$WORKLOAD + - TEST="$PROCESS_CMD && $SHOW_CMD && $LIST_CMD && $CREATE_CMD" script: - echo $TEST && eval $TEST
Update test_global_search.py added 2 test cases
@@ -51,6 +51,10 @@ class TestGlobalSearch(unittest.TestCase): results = global_search.search('extraterrestrial') self.assertTrue('Carter explored themes of extraterrestrial involvement in ancient mass extinctions in this episode, the third in a trilogy.' in results[0].content) + results = global_search.search('awakens & duty & alien') + self.assertTrue('After Mulder awakens from his coma, he realizes his duty to prevent alien colonization. ' in results[0].content) + results = global_search.search('awakens & duty & otherterm') + self.assertTrue(!results[0].content) def test_update_doc(self): self.insert_test_events()
Add validator to prevent using the same shared storage ID in multiple shared storage entries An existing EFS, EBS, or FSx resource cannot be mounted to multiple mount dirs.
@@ -1096,6 +1096,11 @@ class BaseClusterConfig(Resource): name_list=[storage.name for storage in self.shared_storage], resource_name="Shared Storage", ) + self._register_validator( + DuplicateNameValidator, + name_list=self.existing_fs_id_list, + resource_name="Shared Storage IDs", + ) for storage in self.shared_storage: self._register_validator(SharedStorageNameValidator, name=storage.name) if isinstance(storage, SharedFsx): @@ -1163,6 +1168,21 @@ class BaseClusterConfig(Resource): return mount_dir_list + @property + def existing_fs_id_list(self): + """Retrieve the list of IDs of EBS, FSx, EFS provided.""" + fs_id_list = [] + if self.shared_storage: + for storage in self.shared_storage: + fs_id = None + if isinstance(storage, (SharedEfs, SharedFsx)): + fs_id = storage.file_system_id + elif isinstance(storage, SharedEbs): + fs_id = storage.volume_id + if fs_id: + fs_id_list.append(fs_id) + return fs_id_list + @property def compute_subnet_ids(self): """Return the list of all compute subnet ids in the cluster."""
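The new `existing_fs_id_list` property feeds a duplicate-name validator, so reusing the same EFS/FSx/EBS ID across shared-storage entries is rejected. A rough sketch of the underlying check, with illustrative attribute names rather than the real ParallelCluster classes:

```python
from collections import Counter

def existing_fs_ids(shared_storage):
    """Collect file-system/volume ids from storage objects (attribute names illustrative)."""
    ids = []
    for storage in shared_storage or []:
        fs_id = getattr(storage, 'file_system_id', None) or getattr(storage, 'volume_id', None)
        if fs_id:
            ids.append(fs_id)
    return ids

def duplicated(ids):
    """Return every id that appears more than once."""
    return [value for value, count in Counter(ids).items() if count > 1]

assert duplicated(['fs-1234', 'vol-9876', 'fs-1234']) == ['fs-1234']
```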
some print statements during validation to help with debugging
@@ -21,8 +21,10 @@ def validate_schema(REPO_PATH, detection_type, objects, verbose): #Default regex does NOT match ssa___*.yml files: "^(?!ssa___).*\.yml$" #The following search will match ssa___*.yml files: "^ssa___.*\.yml$" if detection_type.startswith("ba_"): + print("***SSA_REGEX_SET***") filename_regex = "^ssa___.*\.yml$" else: + print("***NO SSA_REGEX_SET***") filename_regex = "^(?!ssa___).*\.yml$" @@ -42,7 +44,7 @@ def validate_schema(REPO_PATH, detection_type, objects, verbose): for file in files: if re.search(filename_regex, file) is not None: manifest_files.append((path.join(root, file))) - + print(len(manifest_files)) for manifest_file in manifest_files: if verbose: print("processing manifest {0}".format(manifest_file)) @@ -68,7 +70,7 @@ def validate_schema(REPO_PATH, detection_type, objects, verbose): arr = [] arr.append(object) objects[detection_type] = arr - + print("***END OF VALIDATE SCHEMA ***") return objects, error, errors @@ -253,7 +255,7 @@ def validate_tests(REPO_PATH, object): def main(REPO_PATH, verbose): validation_objects = ['macros','lookups','stories','detections', 'ba_detections','deployments', 'tests'] - validation_objects = ['ba_detections'] + objects = {} schema_error = False
Turn a print into a logger.warning and fix some small things... * "execution" spelling. * Tweak error message a little. Note, though, that there are still further bugs in the code. Possibly there should be a return after this?
@@ -73,8 +73,8 @@ class LaserEVM: created_account = execute_contract_creation(self, creation_code) logging.info("Finished contract creation, found {} open states".format(len(self.open_states))) if len(self.open_states) == 0: - print("No contract was created during the execution of contract creation " - "Try to increase the resources for creation exection (max-depth or create_timeout)") + logging.warning("No contract was created during the execution of contract creation " + "Increase the resources for creation execution (--max-depth or --create_timeout)") # Reset code coverage self.coverage = {} @@ -266,4 +266,3 @@ class LaserEVM: return function return hook_decorator -
chore(dcos-ui): update package Update the DC/OS UI package to include the latest fixes, improvements and features. DCOS-19404 #close
"single_source" : { "kind": "git", "git": "https://github.com/dcos/dcos-ui.git", - "ref": "00168dea379a43ec873e05373b129fce322bc7d6", - "ref_origin": "v1.11.0-rc.9" + "ref": "7756d5cdc3277103b159c3318823fba268eccdb0", + "ref_origin": "v1.11.0-rc.10" } }
Bugfix: Correct regex for will.plugins.friendly.mornin plugin I couldn't get them to work without removing the `\b` tokens at the end.
@@ -5,11 +5,11 @@ from will.decorators import respond_to, periodic, hear, randomly, route, rendere class MorninEveninPlugin(WillPlugin): - @hear("^(good )?(morning?)\b") + @hear("^(good )?(morning?)") def morning(self, message): self.say("mornin', %s" % message.sender.nick, message=message) - @hear("^(good ?|g')?('?night)\b") + @hear("^(good ?|g')?('?night)") def good_night(self, message): now = datetime.datetime.now() if now.weekday() == 4: # Friday
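One plausible explanation for the failure (not stated in the commit): the decorator arguments are ordinary, non-raw Python strings, so `\b` is parsed as the backspace escape rather than a regex word boundary. A quick demonstration, assuming that interpretation:

```python
import re

# In a normal (non-raw) Python string, "\b" is the backspace character \x08,
# so the pattern looks for a literal backspace instead of a word boundary.
assert "\b" == "\x08"
assert re.search("^(good )?(morning?)\b", "good morning") is None

# With a raw string the same pattern behaves as intended.
assert re.search(r"^(good )?(morning?)\b", "good morning") is not None
```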
Stopgap solution for test_training_determinism * Added eq to fix training determinism failure * [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see * Update num samples so no nans
@@ -73,7 +73,7 @@ def train_twice(backend, csv_filename, tmpdir): config = {"input_features": input_features, "output_features": output_features, TRAINER: {"epochs": 2}} # Generate training data - training_data_csv_path = generate_data(input_features, output_features, csv_filename) + training_data_csv_path = generate_data(input_features, output_features, csv_filename, num_examples=100) ludwig_model_1 = LudwigModel(config, logging_level=logging.ERROR, backend=backend) ludwig_model_2 = LudwigModel(config, logging_level=logging.ERROR, backend=backend)
Updated readme for vae Added colab badge for compare results
# Pyprobml VAE +<a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/scripts/vae/compare_results.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> + A collection of Variational AutoEncoders (VAEs) implemented in pytorch with focus on reproducibility and creating reusable blocks that can be used in any project. The aim of this project is to provide a quick and simple working example for many of the cool VAE idea in the textbook. All the models are trained on the [CelebA dataset](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) for consistency and comparison.
added features to get index of files in `CdsHeader.write` [skip ci]
@@ -41,7 +41,8 @@ cdsdicts = {'title': 'Title ?', 'abstract': 'Abstract ?', 'authors': 'Authors ?', 'bibcode': 'ref ?', - 'keywords': '' + 'keywords': '', + 'tableDescription': '' } ByteByByteTemplate = ["Byte-by-byte Description of file: $file", @@ -51,6 +52,8 @@ ByteByByteTemplate = ["Byte-by-byte Description of file: $file", "$bytebybyte", "--------------------------------------------------------------------------------"] +ReadMeTemplate = "src/ReadMe.template" + class CdsSplitter(fixedwidth.FixedWidthSplitter): """ @@ -479,6 +482,9 @@ class CdsHeader(core.BaseHeader): else: buff += newline + "\n" + # last value of ``endb`` is the max column width after formatting. + self.linewidth = endb + # add column notes to ByteByByte notes = self.cdsdicts.get('notes', None) if notes is not None: @@ -499,6 +505,37 @@ class CdsHeader(core.BaseHeader): 'bytebybyte': self.writeByteByByte()}) lines.append(ByteByByte) + #-- get index of files --# + sz = [14, 0, 8] + l = len(str(self.linewidth)) + if l > sz[1]: + sz[1] = l + fIndexRows = (["ReadMe", + MAX_SIZE_README_LINE, + ".", + "this file"], + ['table', + self.linewidth, + str(len(self.cols[0])), + self.cdsdicts['tableDescription']]) + indexfmt = "{0:" + str(sz[0]) + "s} {1:" + str(sz[1]) + "d} {2:>" + str(sz[2]) + "d} {3:s}" + filesIndex = Table(names=['FileName', 'Lrecl', 'Records', 'Explanations'], + rows=fIndexRows) + fIndexLines = StringIO() + filesIndex.write(fIndexLines, format='ascii.fixed_width_no_header', + delimiter=' ', bookend=False, delimiter_pad=None, + formats={'FileName': str(sz[0])+'s', + 'Lrecl': ''+str(sz[1])+'d', + 'Records': '>8s', + 'Explanations': 's'}) + fIndexLines = fIndexLines.getvalue().splitlines() + + # fill up the full ReadMe + rmTemplate = Template(ReadMeTemplate) + print(self.cdsdicts) + ReadMe = rmTemplate.substitute({}) + print(rmTemplate) + class CdsData(fixedwidth.FixedWidthData): """CDS table data reader
bump lower bound of typer version by 2 releases typer <0.4.1 is incompatible with click >=8.1 fixes
@@ -55,7 +55,7 @@ typing-extensions = {version = ">=3.7.4, <5.0", python = "<3.8"} docutils = ">=0.7, <0.19" types-docutils = ">=0.18, <0.19" pydantic = ">=1.2, <2.0" -typer = {extras = ["all"], version = ">=0.3.2,<0.7"} +typer = {extras = ["all"], version = ">=0.4.1,<0.7"} [tool.poetry.dev-dependencies] pre-commit = ">=2.17"
Fixes typos Addresses
@@ -182,7 +182,7 @@ factorizes into a product of two amplitudes. In Ref. :cite:`quesada2019franck` i Gaussian states and threshold detection *************************************** -In the last section we sketched how to obtain the probability that a certain photon-number outcome is obtained when a Gaussian state is measured with photon-number detectors. In this section we show how to obtain the analogous probability for for the case of threshold detectors. These binary outcome detectors can only distinguish between the the vacuum state and occupied states, and thus for a single mode they are described by the POVM elements +In the last section we sketched how to obtain the probability that a certain photon-number outcome is obtained when a Gaussian state is measured with photon-number detectors. In this section we show how to obtain the analogous probability for the case of threshold detectors. These binary outcome detectors can only distinguish between the the vacuum state and occupied states, and thus for a single mode they are described by the POVM elements .. math:: \hat{\Pi}_0^{(i)} = \ket{0_i} \bra{0_i} \text{ and } \hat{\Pi}_1^{(i)} = 1_i - \hat{\Pi}_0^{(i)},
pywikibot: Add support for chunked uploading in imagetransfer.py Support for the -asynchronous and -chunk_size flags was added to upload.py in This patch adds the same options to imagetransfer.py and just maps them to the corresponding UploadBot parameters.
@@ -22,6 +22,10 @@ The following parameters are supported: -force_if_shared Upload the file to the target, even if it exists on that wiki's shared repo + -asynchronous Upload to stash. + + -chunk_size:n Upload in chunks of n bytes. + -file:z Upload many files from textfile: [[Image:x]] [[Image:y]] @@ -144,6 +148,8 @@ class ImageTransferBot(SingleSiteBot, ExistingPageBot): 'keepname': False, 'target': None, 'force_if_shared': False, + 'asynchronous': False, + 'chunk_size': 0, } def __init__(self, **kwargs): @@ -162,6 +168,10 @@ class ImageTransferBot(SingleSiteBot, ExistingPageBot): shared to the target site (e.g. when moving from Commons to another wiki) :type force_if_shared: boolean + :keyword asynchronous: Upload to stash. + :type asynchronous: boolean + :keyword chunk_size: Upload in chunks of this size bytes. + :type chunk_size: integer """ super().__init__(**kwargs) if self.opt.target is None: @@ -217,7 +227,9 @@ class ImageTransferBot(SingleSiteBot, ExistingPageBot): keep_filename=self.opt.keepname, verify_description=not self.opt.keepname, ignore_warning=self.opt.ignore_warning, - force_if_shared=self.opt.force_if_shared) + force_if_shared=self.opt.force_if_shared, + asynchronous=self.opt.asynchronous, + chunk_size=self.opt.chunk_size) # try to upload if bot.skip_run(): @@ -347,7 +359,7 @@ def main(*args: str) -> None: for arg in local_args: opt, _, value = arg.partition(':') if opt in ('-ignore_warning', '-interwiki', '-keepname', - '-force_if_shared'): + '-force_if_shared', '-asynchronous'): options[opt[1:]] = True elif opt == '-tolang': target_code = value @@ -355,6 +367,8 @@ def main(*args: str) -> None: target_family = value elif opt == '-tosite': options['target'] = value + elif opt == '-chunk_size': + options['chunk_size'] = value else: generator_factory.handle_arg(arg)
remove early layout termination in ListView As we go to the end anyway (because we need to update visibleInView for tail delegates), remove early termination. Remove relative size correction, use ref size for that. Make scrolling very accurate (no annoying jumps when scrolling over invisible/non-uniformly sized delegates)
@@ -192,6 +192,12 @@ BaseView { var prerender = noPrerender? 0: this.prerender * size var leftMargin = -prerender var rightMargin = size + prerender + + if (sizes.length > items.length) { + ///fixme: override model update api to make sizes stable + sizes = sizes.slice(0, items.length) + } + if (this._scrollDelta != 0) { if (this.nativeScrolling) { if (horizontal) @@ -209,9 +215,8 @@ BaseView { function(item) { return item.width }: function(item) { return item.height } - var itemsCount = 0 var refSize - for(var i = 0; i < n && (refSize === undefined || p + c < rightMargin); ++i, ++itemsCount) { + for(var i = 0; i < n; ++i) { var item = items[i] var viewPos = p + c @@ -232,14 +237,11 @@ BaseView { } } - if (!item && visibleInModel) { - //we can render, or no sizes available - if (renderable || s === undefined) { + if (!item && visibleInModel && (renderable || s === undefined)) { item = this._createDelegate(i) if (item) created = true } - } if (item && visibleInModel) s = refSize = sizes[i] = getItemSize(item) @@ -275,7 +277,7 @@ BaseView { } } } else { - var nextP = p + refSize + var nextP = p + s if (horizontal) { if (nextP > maxW) maxW = nextP @@ -287,28 +289,9 @@ BaseView { p += s + this.spacing } - for( ;i < n; ++i) { - var item = items[i] - if (item) { - item.visibleInView = false - if (discardDelegates) { - this._discardItem(item) - items[i] = null - created = true - } - } - } if (p > startPos) p -= this.spacing; - if (sizes.length > items.length) { - ///fixme: override model update api to make sizes stable - sizes = sizes.slice(0, items.length) - } - - if (itemsCount) - p *= items.length / itemsCount - if (this.trace) log('result: ' + p + ', max: ' + maxW + 'x' + maxH) if (horizontal) {
Temporarily pin Pythran version at 0.9.7 to resolve test failures. Mostly with the hope that the tests pass. Ideally we should support the most recent version.
@@ -150,7 +150,7 @@ script: else STYLE_ARGS=--no-code-style; if $PYTHON_DBG -V >&2; then CFLAGS="-O0 -ggdb" $PYTHON_DBG runtests.py -vv --no-code-style Debugger --backends=$BACKEND; fi; - if [ -z "${BACKEND##*cpp*}" -a -n "${TRAVIS_PYTHON_VERSION##*-dev}" ]; then pip install pythran; fi; + if [ -z "${BACKEND##*cpp*}" -a -n "${TRAVIS_PYTHON_VERSION##*-dev}" ]; then pip install pythran==0.9.7; fi; if [ "$BACKEND" != "cpp" -a -n "${TRAVIS_PYTHON_VERSION##2*}" -a -n "${TRAVIS_PYTHON_VERSION##pypy*}" -a -n "${TRAVIS_PYTHON_VERSION##*-dev}" -a -n "${TRAVIS_PYTHON_VERSION##*3.4}" ]; then pip install mypy; fi; fi # Need to clear the ccache? Try something like this:
test_installation: Updated to most recent Inkscape releases Covers Inkscape 1.0, 1.1 and 1.2, added some comments
@@ -76,12 +76,18 @@ REQUIREMENT_CHECK_ERROR = 65 good_configurations = [] +# Definition of working combinations of Inkscape and LaTeX for latex in [("pdflatex",), ("lualatex",), ("xelatex",)]: - good_configurations.append([("inkscape", "Inkscape 1.0alpha2 (883c7bc, 2019-06-02)"), latex]) + good_configurations.append([("inkscape", "Inkscape 1.0 (4035a4fb49, 2020-05-01)"), latex]) + good_configurations.append([("inkscape", "Inkscape 1.1.1 (3bf5ae0d25, 2021-09-20)"), latex]) + good_configurations.append([("inkscape", "Inkscape 1.2-dev (1dd7bebcbd, 2021-12-20)"), latex]) +# Test: Installation of working combinations must succeed for good_configuration in good_configurations: test_configuration(good_configuration, REQUIREMENT_CHECK_SUCCESS) +# Test: If one component of the working combinations is missing +# installation must fail for good_configuration in good_configurations: for i in range(len(good_configuration)): # good configuration without one element is bad @@ -89,24 +95,24 @@ for good_configuration in good_configurations: print(bad_configuration) test_configuration(bad_configuration, REQUIREMENT_CHECK_ERROR) +# Test wrong Inkscape version and no pdflatex installed test_configuration([ ("inkscape", "Inkscape 0.92.3 (2405546, 2018-03-11)"), ], REQUIREMENT_CHECK_ERROR) +# Test wrong Inkscape version and pdflatex installed test_configuration([ ("inkscape", "Inkscape 0.92.3 (2405546, 2018-03-11)"), ("pdflatex",) ], REQUIREMENT_CHECK_ERROR) +# Test what's happening when no version information +# is returned by Inkscape test_configuration([ ("inkscape",), ("pdflatex",), ], REQUIREMENT_CHECK_UNKNOWN) -test_configuration([ - ("inkscape", "Inkscape 1.0beta2 (2b71d25, 2019-12-03)"), - ("pdflatex",), -], REQUIREMENT_CHECK_SUCCESS) - +# Test: Nothing is installed -> Installation must fail test_configuration([ ], REQUIREMENT_CHECK_ERROR)
Rollback screenshot:true hack This feature will have to wait until I'm storing screenshots in a single field with sub fields.
@@ -30,8 +30,6 @@ class Elastic: return 0,[] if query == '': query = 'nmap' - if query == 'screenshot:true': - query = "httpheadshot:* OR httpsheadshot:* OR vncsheadshot:*" try: result = self.es.search(index="nmap", doc_type="_doc", body={"size": limit, "from": offset, "query": {"query_string": { 'query': query, "fields": ["nmap_data"], "default_operator": "AND"}}, "sort": {"ctime": {"order": "desc"}}})
Fixed Twilio test function * Fixed Twilio test function Fixes * CR fixes
@@ -45,11 +45,11 @@ configuration: required: false script: script: |+ - var sendRequest = function(body) { + var sendRequest = function(method, url, body) { var res = http( params.server.replace(/[\/]+$/, '')+'/'+params.sid+'/Messages.json', { - Method: 'POST', + Method: method, Body: body, Headers: {'Content-Type': ['application/x-www-form-urlencoded']}, Username: params.sid, @@ -71,10 +71,13 @@ script: switch (command) { // This is the call made when pressing the integration test button. case 'test-module': - sendRequest(encodeToURLQuery(params.from,params.from,'Test')); + // from Twilio instructions, in order to ger the list of accounts, the below curl command should be used. + // using this as a method to test tht the credentails and connectivity are good, without using sms buckets + // curl -G https://api.twilio.com/2010-04-01/Accounts -u '[YOUR ACCOUNT SID]:[YOUR AUTH TOKEN]' + sendRequest('GET', params.server.replace(/[\/]+$/, ''), ""); return 'ok'; default: - sendRequest(encodeToURLQuery(args.to, (args.from ? args.from : params.from), args.body)); + sendRequest('POST', params.server.replace(/[\/]+$/, '')+'/'+params.sid+'/Messages.json', encodeToURLQuery(args.to, (args.from ? args.from : params.from), args.body)); return 'Message sent successfully!'; } @@ -93,3 +96,4 @@ script: description: send an SMS hidden: false fromversion: 2.5.0 +releaseNotes: Test action checks credentials only now \ No newline at end of file
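The test now lists accounts instead of sending an SMS, which verifies credentials and connectivity without consuming message quota. The same check expressed in Python with the `requests` library, using placeholder credentials:

```python
import requests

ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"   # placeholder credentials
AUTH_TOKEN = "your_auth_token"

# Listing accounts only exercises authentication; no SMS is sent.
resp = requests.get(
    "https://api.twilio.com/2010-04-01/Accounts.json",
    auth=(ACCOUNT_SID, AUTH_TOKEN),
)
print(resp.status_code)   # 200 with valid credentials, 401 otherwise
```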
test_dbcl002_cluster_by_row_or_col passes More specifically, values for col_dendro_traces_{min,max} are specified even if there are no dendrograms to plot (no traces).
@@ -493,12 +493,13 @@ Methods: cluster_curve_numbers[len(fig.data)] = ["row", i] fig.append_trace(rdt, 2, 1) - col_dendro_traces_min_y = np.concatenate( - [r["y"] for r in col_dendro_traces] - ).min() - col_dendro_traces_max_y = np.concatenate( - [r["y"] for r in col_dendro_traces] - ).max() + col_dendro_traces_y = [r["y"] for r in col_dendro_traces] + # arbitrary extrema if col_dendro_traces_y is empty + col_dendro_traces_min_y = 0 + col_dendro_traces_max_y = 1 + if len(col_dendro_traces_y): + col_dendro_traces_min_y = np.concatenate(col_dendro_traces_y).min() + col_dendro_traces_max_y = np.concatenate(col_dendro_traces_y).max() # ensure that everything is aligned properly # with the heatmap
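The fix supplies arbitrary extrema when there are no dendrogram traces, so `np.concatenate` is never called on an empty list. The guard, reduced to a standalone helper (names are illustrative):

```python
import numpy as np

def trace_extrema(traces, default_min=0.0, default_max=1.0):
    """Min/max over all trace y-arrays, falling back to arbitrary defaults when empty."""
    ys = [t["y"] for t in traces]
    if not ys:
        return default_min, default_max
    stacked = np.concatenate(ys)
    return stacked.min(), stacked.max()

assert trace_extrema([]) == (0.0, 1.0)
assert trace_extrema([{"y": np.array([2.0, 5.0])}, {"y": np.array([-1.0])}]) == (-1.0, 5.0)
```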
Fix typo introduced in CR:
@@ -33,7 +33,7 @@ Usage: e.g., to test this project against Python 2.7 and Python 3.6, you would: - ../scripts/jenkins/runner.sh /usr/local/bin/python2.7 + ./scripts/jenkins/runner.sh /usr/local/bin/python2.7 and
Cycles ShaderNetworkAlgo : Simplify shader predicates Remove the variables that are only used in one place. Move declaration to the point where the variable is actually needed, so the "reuse previously created node" section is doing one thing only.
@@ -153,11 +153,6 @@ ccl::ShaderNode *convertWalk( const ShaderNetwork::Parameter &outputParameter, c { // Reuse previously created node if we can. const IECoreScene::Shader *shader = shaderNetwork->getShader( outputParameter.shader ); - const bool isOSLShader = boost::starts_with( shader->getType(), "osl:" ); - const bool isConverter = boost::starts_with( shader->getName(), "convert" ); - const bool isAOV = boost::starts_with( shader->getType(), "cycles:aov:" ); - const bool isImageTexture = shader->getName() == "image_texture"; - auto inserted = converted.insert( { outputParameter.shader, nullptr } ); ccl::ShaderNode *&node = inserted.first->second; if( !inserted.second ) @@ -167,6 +162,8 @@ ccl::ShaderNode *convertWalk( const ShaderNetwork::Parameter &outputParameter, c // Create node for shader. + const bool isOSLShader = boost::starts_with( shader->getType(), "osl:" ); + if( isOSLShader ) { if( shaderManager && shaderManager->use_osl() ) @@ -181,7 +178,7 @@ ccl::ShaderNode *convertWalk( const ShaderNetwork::Parameter &outputParameter, c return node; } } - else if( isConverter ) + else if( boost::starts_with( shader->getName(), "convert" ) ) { /// \todo Why can't this be handled by the generic case below? There are NodeTypes /// registered for each of these conversions, so `NodeType::find()` does work. The @@ -222,6 +219,8 @@ ccl::ShaderNode *convertWalk( const ShaderNetwork::Parameter &outputParameter, c // Set the shader parameters + const bool isImageTexture = shader->getName() == "image_texture"; + for( const auto &namedParameter : shader->parameters() ) { // We needed to change any "." found in the socket input names to @@ -364,7 +363,7 @@ ccl::ShaderNode *convertWalk( const ShaderNetwork::Parameter &outputParameter, c } } - if( shaderNetwork->outputShader() == shader && !isAOV ) + if( shaderNetwork->outputShader() == shader && !boost::starts_with( shader->getType(), "cycles:aov:" ) ) { // Connect to the main output node of the cycles shader graph. // Either cycles:surface, cycles:volume or cycles:displacement.
vivisect: don't configure logging within library code this stomps on the configuration provided by applications
@@ -51,12 +51,6 @@ import vivisect.analysis.generic.emucode as v_emucode logger = logging.getLogger(__name__) -e_common.setLogging(logger, 'WARNING') - -# FIXME: UGH. Due to our package structure, we have no centralized logging leve -# so we have to force it here and a few other places -elog = logging.getLogger(envi.__name__) -elog.setLevel(logger.getEffectiveLevel()) STOP_LOCS = (LOC_STRING, LOC_UNI, LOC_STRUCT, LOC_CLSID, LOC_VFTABLE, LOC_IMPORT, LOC_PAD, LOC_NUMBER)
When building relayout/restyle strings don't use brackets 'foo[0].bar' -> 'foo.0.bar' For unknown reasons, this caused a jitter in the datashader example
@@ -2733,7 +2733,7 @@ class BasePlotlyType: child_prop_val = getattr(self, child.plotly_name) if isinstance(child_prop_val, (list, tuple)): child_ind = BaseFigure._index_is(child_prop_val, child) - obj_path = '{child_name}[{child_ind}].{prop}'.format( + obj_path = '{child_name}.{child_ind}.{prop}'.format( child_name=child.plotly_name, child_ind=child_ind, prop=prop_path_str)
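The change swaps bracketed indices for dotted path segments when building relayout/restyle property strings. A trivial sketch of the new format:

```python
def prop_path(child_name: str, child_index: int, prop: str) -> str:
    """Build a dotted property path: list indices become plain path segments."""
    return f"{child_name}.{child_index}.{prop}"

assert prop_path("annotations", 0, "text") == "annotations.0.text"   # was "annotations[0].text"
```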
disable non-deterministic cudnn ctcloss Summary: Associated issue: Pull Request resolved:
@@ -364,7 +364,9 @@ Tensor ctc_loss(const Tensor& log_probs, const Tensor& targets, IntArrayRef inpu Tensor res; if (use_cudnn) { - res = std::get<0>(at::_cudnn_ctc_loss(log_probs, targets, input_lengths, target_lengths, BLANK, ctx.deterministicCuDNN(), zero_infinity)); + // non-deterministic ctc loss on cudnn disabled due to inconsistent results + // see: https://github.com/pytorch/pytorch/issues/21680 + res = std::get<0>(at::_cudnn_ctc_loss(log_probs, targets, input_lengths, target_lengths, BLANK, /*deterministic=*/true, zero_infinity)); } else { res = std::get<0>(at::_ctc_loss(log_probs, targets, input_lengths, target_lengths, BLANK, zero_infinity)); if (zero_infinity) {
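For context, `torch.nn.functional.ctc_loss` is the public entry point whose cuDNN backend the patch pins to deterministic mode. This toy call just exercises the documented API on random data; it is a sketch and does not touch the patched internals:

```python
import torch
import torch.nn.functional as F

# Toy CTC setup: T=50 time steps, N=2 batch items, C=20 classes (blank index 0).
log_probs = torch.randn(50, 2, 20).log_softmax(2)
targets = torch.randint(1, 20, (2, 10), dtype=torch.long)
input_lengths = torch.full((2,), 50, dtype=torch.long)
target_lengths = torch.full((2,), 10, dtype=torch.long)

loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths,
                  blank=0, zero_infinity=True)
print(loss.item())
```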
Update process_results.py Update to allow for scenario with zero PV
@@ -339,9 +339,9 @@ def process_results(self, dfm_list, data, meta, saveToDB=True): self.inputs["Storage"]["installed_cost_us_dollars_per_kwh"], self.inputs["Storage"]["replace_cost_us_dollars_per_kwh"], self.inputs["Storage"]["battery_replacement_year"]) - pv_om = 0 + pv_om = 0.0 for i in range(len(self.nested_outputs['Scenario']['Site']['PV'])): - pv_om += self.nested_outputs['Scenario']['Site']['PV'][i]['total_fixed_om_cost_us_dollars'] + pv_om += (self.nested_outputs['Scenario']['Site']['PV'][i]['total_fixed_om_cost_us_dollars'] or 0.0) diesel_variable_om = (self.nested_outputs['Scenario']['Site']['Generator']['total_variable_om_cost_us_dollars'] or 0) diesel_fixed_om = (self.nested_outputs['Scenario']['Site']['Generator']['total_fixed_om_cost_us_dollars'] or 0)
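The `(value or 0.0)` idiom makes the accumulation robust to entries that are `None`, e.g. when a scenario has zero PV. Reduced to a tiny helper:

```python
def safe_total(values):
    """Sum entries that may be None, treating None as 0.0."""
    return sum((v or 0.0) for v in values)

assert safe_total([125.0, None, 40.5]) == 165.5
```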
refactor/Composition/input_parsing Modularized input parsing by transferring processing steps from monolithic to targeted methods Vis a vis the above, fully replaced _adjust_stimulus_dict and _adjust_execution_stimuli methods
@@ -59,7 +59,7 @@ class CompositionRunner(): # 3) Resize inputs to be of the form [[[]]], # where each level corresponds to: <TRIALS <PORTS <INPUTS> > > - inputs,_ = self._composition._standardize_input_dict(inputs) + inputs,_ = self._composition._parse_dict(inputs) return inputs def _infer_target_nodes(self, targets: dict):
Update indy_node/test/nym_txn/test_nym_auth_rules.py clarify comment
@@ -334,7 +334,7 @@ def test_nym_edit( if editor.verkey is None: # skip that as well since it doesn't make sense return - if not ROLE in edit_op: # skip if the update operation doesn't changes neither role nor verkey + if not ROLE in edit_op: # skip if the update operation changes neither role nor verkey if not VERKEY in edit_op: return
Update generic.txt Moving from ```domain.txt```
@@ -3655,3 +3655,19 @@ wagenstead.xyz # Reference: https://twitter.com/eComscan/status/1136181192796061697 dns-forwarding.com + +# Reference: # Reference: https://speakerdeck.com/ashley920/into-the-fog-the-return-of-icefog-apt?slide=35 + +dnsedc.com + +# Reference: # Reference: https://speakerdeck.com/ashley920/into-the-fog-the-return-of-icefog-apt?slide=35 + +dellnewsup.net + +# Reference: https://twitter.com/0xrb/status/1135869164239769601 (# root domain) + +yiffgallery.xyz + +# Reference: https://www.virustotal.com/gui/domain/sportsnewsa.net/relations + +sportsnewsa.net
bug fix in TotalCHPStandbyCharges calculation rm extra ")"
@@ -87,7 +87,6 @@ function add_cost_expressions(m, p) if !isempty(p.CHPTechs) m[:TotalCHPStandbyCharges] = @expression(m, p.CHPStandbyCharge * 12 * sum( m[:dvSize][t] for t in p.CHPTechs) * p.pwf_e) - ) else m[:TotalCHPStandbyCharges] = @expression(m, 0.0) end
Minor fix in segway example using mean instead of sum and division
@@ -50,7 +50,7 @@ def experiment(alg, params, n_epochs, n_episodes, n_ep_per_fit): print('mu: ', p[:n_weights]) print('sigma: ', p[n_weights:]) print('Reward at iteration ' + str(i) + ': ' + - str(np.sum(J)/n_episodes)) + str(np.mean(J))) print('Press a button to visualize the segway...') input()
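`np.mean(J)` and `np.sum(J) / n_episodes` agree whenever `J` holds exactly one return per episode, so the change is purely a readability fix. A quick check of that equivalence:

```python
import numpy as np

J = np.array([12.0, -3.0, 7.5, 0.5])   # one return per episode
n_episodes = len(J)
assert np.isclose(np.sum(J) / n_episodes, np.mean(J))
```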
optimize firewalld.present open port handling only call firewalld ports functions if necessary Fixes
@@ -498,6 +498,7 @@ def _present(name, 'new': 'Masquerading successfully ' 'disabled.'}}) + if ports or prune_ports: ports = ports or [] try: _current_ports = __salt__['firewalld.list_ports'](name, permanent=True)
workloads/gfxbench: Do not clear package data on launch By clearing the application data each time the workload is run this forces the required assets to be re-installed each time. As the workload is not affected by persistent state do not perform the clearing.
@@ -21,6 +21,7 @@ class Gfxbench(ApkUiautoWorkload): name = 'gfxbench-corporate' package_names = ['net.kishonti.gfxbench.gl.v50000.corporate'] + clear_data_on_reset = False regex_matches = [re.compile(r'Car Chase score (.+)'), re.compile(r'Car Chase Offscreen score (.+)'), re.compile(r'Manhattan 3.1 score (.+)'),
Minor fix in documentation Changed name to serializable tutorial
-How to use the Serializable interface -====================================== +How to Save and Load (Serializable interface) +============================================= -In this tutorial, we explain in detail the ``Serializable`` interface. We first explain how to use classes -implementing the ``Serializable`` interface, and then we provide a small example of how to implement the -``Serializable`` interface on a custom class to serialize the object properly on disk. +In this tutorial, we explain in detail the ``Serializable`` interface, i.e. the interface to save and load classes from +disk. We first explain how to use classes implementing the ``Serializable`` interface, and then we provide a small +example of how to implement the ``Serializable`` interface on a custom class to serialize the object properly on disk. The Mushroom RL save format (extension ``.msh``) is nothing else than a zip file, containing some information (stored into the ``config`` file) to load the object. This information can be accessed easily and you can try to recover the information
Add ability to filter ratings by score in the ratings admin Also fix querysets to make fewer queries, making the whole thing less painful for the database.
from django.contrib import admin +from django.db.models import Prefetch from django.utils.html import format_html from django.utils.translation import ugettext_lazy as _ from django.urls import reverse +from olympia.addons.models import Addon from olympia.translations.utils import truncate_text from olympia.zadmin.admin import related_single_content_link @@ -52,11 +54,20 @@ class RatingAdmin(admin.ModelAdmin): 'user_link', 'deleted') list_display = ('id', 'addon', 'created', 'user', 'ip_address', 'rating', 'is_reply', 'flag', 'deleted', 'truncated_body',) - list_filter = ('deleted', RatingTypeFilter) + list_filter = ('deleted', RatingTypeFilter, 'rating') actions = ('delete_selected',) + list_select_related = ('user',) # For addon/reply_to see get_queryset() + + def get_queryset(self, request): + base_qs = Rating.unfiltered.all() + return base_qs.prefetch_related( + Prefetch( + 'addon', queryset=Addon.unfiltered.all().only_translations()), + Prefetch('reply_to', queryset=base_qs), + ) - def queryset(self, request): - return Rating.unfiltered.all() + def has_add_permission(self, request): + return False def truncated_body(self, obj): return truncate_text(obj.body, 140)[0] if obj.body else ''
Fixed missing format for the Cat# in the distributors. Only for URLs Fixes
@@ -984,9 +984,9 @@ def add_dist_to_worksheet(ss, logger, columns_global, start_row, start_col, unit dist_url = part.url.get(dist) if dist_url: if ss.SUPPRESS_CAT_URL: - ss.write_url(row, col_part_num, dist_url, string=dist_part_num) + ss.write_url(row, col_part_num, dist_url, 'part_format', dist_part_num) else: - ss.write_url(row, start_col + columns['link']['col'], dist_url) + ss.write_url(row, start_col + columns['link']['col'], dist_url, 'part_format') # Available parts for this distributor unless it is None which means the part is not stocked. col_avail = start_col + columns['avail']['col']
add function.polyfunc convenience function The `function.polyfunc` function takes a list of polynomial coefficients and dofs per transform and creates an inflated `function.Polyval`.
@@ -3154,6 +3154,34 @@ def function(fmap, nmap, ndofs): func = Function(stds=[fmap[trans] for trans in transforms], depth=depth, trans=promote, index=index) return Inflate(func, dofmap, ndofs, axis=0) +def polyfunc(coeffs, dofs, ndofs, transforms, *, issorted=True): + ''' + Create an inflated :class:`Polyval` with coefficients ``coeffs`` and + corresponding dofs ``dofs``. The arguments ``coeffs``, ``dofs`` and + ``transforms`` are assumed to have matching order. In addition, if + ``issorted`` is true, the ``transforms`` argument is assumed to be sorted. + ''' + + transforms = tuple(transforms) + if issorted: + dofs = tuple(dofs) + coeffs = tuple(coeffs) + else: + dofsmap = dict(zip(transforms, dofs)) + coeffsmap = dict(zip(transforms, coeffs)) + transforms = tuple(sorted(transforms)) + dofs = tuple(dofsmap[trans] for trans in transforms) + coeffs = tuple(coeffsmap[trans] for trans in transforms) + fromdims, = set(transform.fromdims for transform in transforms) + promote = Promote(fromdims, trans=TRANS) + index = FindTransform(transforms, promote) + dofmap = DofMap(dofs, index=index) + depth = Get([len(trans) for trans in transforms]+[builtins.min(map(len, transforms))], axis=0, item=index) + points = RootCoords(fromdims, TailOfTransform(promote, depth)) + z = numeric.const(numpy.zeros((0,)+coeffs[0].shape[1:])) + func = Polyval(Elemwise(coeffs+(z,), index, dtype=float), points, fromdims) + return Inflate(func, dofmap, ndofs, axis=0) + def elemwise( fmap, shape, default=None ): if default is not None: raise NotImplemented('default is not supported anymore')
readme: update git install instructions to use pip wrapper Which will force install from snakeoil git as required by the build/install deps in the repo. [skip ci]
@@ -46,7 +46,8 @@ Installing latest pypi release in a virtualenv:: Installing from git in a virtualenv:: - pip install https://github.com/pkgcore/pkgcore/archive/master.tar.gz + git clone https://github.com/pkgcore/pkgcore.git + ./pkgcore/requirements/pip.sh ./pkgcore Installing from a tarball or git repo::
Update PULL_REQUEST_TEMPLATE.md Remove extra ] in markdown link
## Description Thanks for contributing to tskit! :heart: -A guide to the PR process is [here]](https://tskit.readthedocs.io/en/latest/development.html#development_workflow_git) +A guide to the PR process is [here](https://tskit.readthedocs.io/en/latest/development.html#development_workflow_git) Please replace this text with a summary of the change and which issue is fixed, if any. Please also include relevant motivation and context. Fixes #(issue) <- Putting the issue number here will auto-close the issue when this PR is merged
Build and run OCaml programs separately TN:
@@ -264,9 +264,13 @@ def build_and_run(grammar, py_script=None, ada_main=None, lexer=None, with open('dune-project', 'w') as f: f.write('(lang dune 1.6)') - run('dune', 'exec', '--display', 'quiet', '--root', '.', + # Build the ocaml executable + run('dune', 'build', '--display', 'quiet', '--root', '.', './{}.exe'.format(ocaml_main)) + # Run the ocaml executable + run('./_build/default/{}.exe'.format(ocaml_main)) + def add_gpr_path(dirname): """
Modify the URL of OVN tutorial The original URL leads the user to a markdown file; I think it's better to use the official HTML document rather than a markdown file.
@@ -7,4 +7,4 @@ OpenStack and OVN Tutorial The OVN project documentation includes an in depth tutorial of using OVN with OpenStack. -`OpenStack and OVN Tutorial <https://github.com/ovn-org/ovn/blob/master/Documentation/tutorials/ovn-openstack.rst>`_ +`OpenStack and OVN Tutorial <https://docs.ovn.org/en/latest/tutorials/ovn-openstack.html>`_