Changelog for python-dask-0.15.1-1.5.noarch.rpm:
Thu May 4 14:00:00 2017 toddrme2178@gmail.com
- Implement single-spec version
- Update source URL.
- Split classes into own subpackages to lighten base dependencies.
- Update to version 0.15.1

* Add storage_options to to_textfiles and to_csv (:pr:`2466`) (example sketch below)

* Rechunk and simplify rfftfreq (:pr:`2473`), (:pr:`2475`)

* Better support ndarray subclasses (:pr:`2486`)

* Import star in dask.distributed (:pr:`2503`)

* Threadsafe cache handling with tokenization (:pr:`2511`)
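
A rough sketch of the ``storage_options`` pass-through mentioned in the first
item of this list; the bucket name and the ``anon`` flag are placeholders and
an installed s3fs backend is assumed::

  import pandas as pd
  import dask.dataframe as dd

  ddf = dd.from_pandas(pd.DataFrame({"x": [1, 2, 3, 4]}), npartitions=2)

  # storage_options is handed through to the underlying filesystem layer
  # (s3fs for s3:// paths); Bag.to_textfiles accepts the same keyword.
  ddf.to_csv("s3://example-bucket/output/part-*.csv",
             storage_options={"anon": False})
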
- Update to version 0.15.0
+ Array

* Add dask.array.stats submodule (:pr:`2269`)

* Support ``ufunc.outer`` (:pr:`2345`)

* Optimize fancy indexing by reducing graph overhead (:pr:`2333`) (:pr:`2394`)

* Faster array tokenization using alternative hashes (:pr:`2377`)

* Added the matmul ``@`` operator (:pr:`2349`)

* Improved coverage of the ``numpy.fft`` module (:pr:`2320`) (:pr:`2322`) (:pr:`2327`) (:pr:`2323`)

* Support NumPy's ``__array_ufunc__`` protocol (:pr:`2438`)
+ Bag

* Fix bug where reductions on bags with no partitions would fail (:pr:`2324`)

* Add broadcasting and variadic ``db.map`` top-level function, and remove
auto-expansion of tuples as map arguments (:pr:`2339`) (example sketch below)

* Rename ``Bag.concat`` to ``Bag.flatten`` (:pr:`2402`)
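
A minimal sketch of the variadic, broadcasting ``db.map`` mentioned above; the
bags and the constant are made up for illustration::

  import dask.bag as db

  a = db.from_sequence([1, 2, 3, 4], npartitions=2)
  b = db.from_sequence([10, 20, 30, 40], npartitions=2)

  # db.map takes the function first, then any mix of bags and plain values;
  # plain values (the constant 100 here) are broadcast to every element.
  total = db.map(lambda x, y, z: x + y + z, a, b, 100)
  print(total.compute())  # [111, 122, 133, 144]
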
+ DataFrame

* Parquet improvements (:pr:`2277`) (:pr:`2422`)
+ Core

* Move dask.async module to dask.local (:pr:`2318`)

* Support callbacks with nested scheduler calls (:pr:`2397`)

* Support pathlib.Path objects as uris (:pr:`2310`)
- Update to version 0.14.3
+ DataFrame

* Pandas 0.20.0 support
- Update to version 0.14.2
+ Array

* Add da.indices (:pr:`2268`), da.tile (:pr:`2153`), da.roll (:pr:`2135`)

* Simultaneously support drop_axis and new_axis in da.map_blocks (:pr:`2264`)

* Rechunk and concatenate work with unknown chunksizes (:pr:`2235`) and (:pr:`2251`)

* Support non-numpy container arrays, notably sparse arrays (:pr:`2234`)

* Tensordot contracts over multiple axes (:pr:`2186`)

* Allow delayed targets in da.store (:pr:`2181`)

* Support interactions against lists and tuples (:pr:`2148`)

* Constructor plugins for debugging (:pr:`2142`)

* Multi-dimensional FFTs (single chunk) (:pr:`2116`)
+ Bag

* to_dataframe enforces consistent types (:pr:`2199`)
+ DataFrame

* Set_index always fully sorts the index (:pr:`2290`)

* Support compatibility with pandas 0.20.0 (:pr:`2249`), (:pr:`2248`), and (:pr:`2246`)

* Support Arrow Parquet reader (:pr:`2223`)

* Time-based rolling windows (:pr:`2198`)

* Repartition can now create more partitions, not just fewer (:pr:`2168`)
+ Core

* Always use absolute paths when on POSIX file system (:pr:`2263`)

* Support user provided graph optimizations (:pr:`2219`)

* Refactor path handling (:pr:`2207`)

* Improve fusion performance (:pr:`2129`), (:pr:`2131`), and (:pr:`2112`)
- Update to version 0.14.1
+ Array

* Micro-optimize optimizations (:pr:`2058`)

* Change slicing optimizations to avoid fusing raw numpy arrays (:pr:`2075`)
(:pr:`2080`)

* Dask.array operations now work on numpy arrays (:pr:`2079`)

* Reshape now works in a much broader set of cases (:pr:`2089`)

* Support deepcopy python protocol (:pr:`2090`)

* Allow user-provided FFT implementations in ``da.fft`` (:pr:`2093`)
+ Bag
+ DataFrame

* Fix to_parquet with empty partitions (:pr:`2020`)

* Optional ``npartitions='auto'`` mode in ``set_index`` (:pr:`2025`)

* Optimize shuffle performance (:pr:`2032`)

* Support efficient repartitioning along time windows like
``repartition(freq='12h')`` (:pr:`2059`) (example sketch below)

* Improve speed of categorize (:pr:`2010`)

* Support single-row dataframe arithmetic (:pr:`2085`)

* Automatically avoid shuffle when setting index with a sorted column
(:pr:`2091`)

* Improve handling of integer NAs in read_csv (:pr:`2098`)
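
A small sketch of the time-window repartitioning described above; the toy
hourly index is made up for illustration::

  import pandas as pd
  import dask.dataframe as dd

  index = pd.date_range("2017-01-01", periods=48, freq="H")
  ddf = dd.from_pandas(pd.DataFrame({"x": range(48)}, index=index),
                       npartitions=6)

  # Repartition a time-indexed frame into roughly 12-hour partitions by
  # picking new divisions on the index; no shuffle of the data is needed.
  ddf12 = ddf.repartition(freq="12h")
  print(ddf12.npartitions, ddf12.divisions)
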
+ Delayed

* Repeated attribute access on delayed objects uses the same key (:pr:`2084`)
+ Core

* Improve naming of nodes in dot visuals to avoid generic ``apply``
(:pr:`2070`)

* Ensure that worker processes have different random seeds (:pr:`2094`)
- Update to version 0.14.0
+ Array

* Fix corner cases with zero shape and misaligned values in ``arange``

* Improve concatenation efficiency (:pr:`1923`)

* Avoid hashing in ``from_array`` if name is provided (:pr:`1972`)
+ Bag

* Repartition can now increase number of partitions (:pr:`1934`)

* Fix bugs in some reductions with empty partitions (:pr:`1939`), (:pr:`1950`),
(:pr:`1953`)
+ DataFrame

* Support non-uniform categoricals (:pr:`1877`), (:pr:`1930`)

* Groupby cumulative reductions (:pr:`1909`)

* DataFrame.loc indexing now supports lists (:pr:`1913`)

* Improve multi-level groupbys (:pr:`1914`)

* Improved HTML and string repr for DataFrames (:pr:`1637`)

* Parquet append (:pr:`1940`)

* Add ``dd.demo.daily_stock`` function for teaching (:pr:`1992`)
+ Delayed

* Add ``traverse=`` keyword to delayed to optionally avoid traversing nested
data structures (:pr:`1899`) (example sketch below)

* Support Futures in from_delayed functions (:pr:`1961`)

* Improve serialization of decorated delayed functions (:pr:`1969`)
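
A brief sketch of the ``traverse=`` keyword noted above; the nested payload is
a made-up example::

  from dask import delayed

  payload = {"config": {"alpha": 0.1}, "data": list(range(10000))}

  # By default delayed() walks nested containers looking for other Delayed
  # objects; traverse=False skips that walk and treats the container as one
  # opaque argument, which is cheaper for large plain-data structures.
  wrapped = delayed(payload, traverse=False)
  print(wrapped.compute()["config"]["alpha"])
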
+ Core

* Improve Windows path parsing in corner cases (:pr:`1910`)

* Rename tasks when fusing (:pr:`1919`)

* Add top level ``persist`` function (:pr:`1927`) (example sketch below)

* Propagate ``errors=`` keyword in byte handling (:pr:`1954`)

* Dask.compute traverses Python collections (:pr:`1975`)

* Structural sharing between graphs in dask.array and dask.delayed (:pr:`1985`)
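
A minimal sketch of the top-level ``persist`` function mentioned above, using
a throwaway random array::

  import dask
  import dask.array as da

  x = da.random.random((4000, 4000), chunks=(1000, 1000))
  y = (x + x.T).mean(axis=0)

  # dask.persist evaluates the task graph once and returns an equivalent
  # collection backed by the concrete results, so later operations start
  # from the materialized chunks instead of recomputing the whole graph.
  (y,) = dask.persist(y)
  print(y[:5].compute())
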
- Update to version 0.13.0
+ Array

* Mandatory dtypes on dask.array. All operations maintain dtype information,
and UDF functions like map_blocks now require a dtype= keyword if it cannot
be inferred. (:pr:`1755`) (example sketch below)

* Support arrays without known shapes, such as arise when slicing arrays with
arrays or converting dataframes to arrays (:pr:`1838`)

* Support mutation by setting one array with another (:pr:`1840`)

* Tree reductions for covariance and correlations. (:pr:`1758`)

* Add SerializableLock for better use with distributed scheduling (:pr:`1766`)

* Improved atop support (:pr:`1800`)

* Rechunk optimization (:pr:`1737`), (:pr:`1827`)
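
To illustrate the mandatory ``dtype=`` requirement for ``map_blocks``
described above (the blockwise function here is made up)::

  import numpy as np
  import dask.array as da

  x = da.ones((10, 10), chunks=(5, 5))

  # When dask cannot infer what a custom blockwise function returns, the
  # output dtype must be declared so the result keeps dtype information.
  y = x.map_blocks(lambda block: (block * 0.5).astype(np.float32),
                   dtype=np.float32)
  print(y.dtype)  # float32
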
+ Bag

* Avoid wrong results when recomputing the same groupby twice (:pr:`1867`)
+ DataFrame

* Add ``map_overlap`` for custom rolling operations (:pr:`1769`) (example sketch below)

* Add ``shift`` (:pr:`1773`)

* Add Parquet support (:pr:`1782`) (:pr:`1792`) (:pr:`1810`), (:pr:`1843`),
(:pr:`1859`), (:pr:`1863`)

* Add missing methods combine, abs, autocorr, sem, nsmallest, first, last,
prod (:pr:`1787`)

* Approximate nunique (:pr:`1807`), (:pr:`1824`)

* Reductions with multiple output partitions (for operations like
drop_duplicates) (:pr:`1808`), (:pr:`1823`) (:pr:`1828`)

* Add delitem and copy to DataFrames, increasing mutation support (:pr:`1858`)
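
A short sketch of ``map_overlap`` for a custom rolling computation, as
mentioned above; the data and window size are arbitrary::

  import pandas as pd
  import dask.dataframe as dd

  ddf = dd.from_pandas(pd.DataFrame({"x": range(10)}), npartitions=2)

  # Each partition is extended with 2 rows from its left neighbour (before=2)
  # and 0 rows from the right (after=0), so a 3-wide rolling mean computed
  # per partition still sees across the partition boundary.
  rolled = ddf.map_overlap(lambda part: part.rolling(3).mean(), 2, 0)
  print(rolled.compute())
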
+ Delayed

* Changed behaviour for ``delayed(nout=0)`` and ``delayed(nout=1)``:
``delayed(nout=1)`` does not default to ``out=None`` anymore, and
``delayed(nout=0)`` is also enabled. That is, functions that return
tuples of length 1 or 0 can now be handled correctly. This is especially
handy if functions with a variable number of outputs are wrapped by
``delayed``. E.g. a trivial example:
``delayed(lambda *args: args, nout=len(vals))(*vals)``
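
A slightly expanded, runnable version of the trivial example above::

  from dask import delayed

  vals = (1, 2, 3)

  # nout tells delayed how many outputs the wrapped call produces, so the
  # result can be unpacked into separate Delayed objects (including the
  # nout=1 and nout=0 cases for length-1 and length-0 tuples).
  a, b, c = delayed(lambda *args: args, nout=len(vals))(*vals)
  print(a.compute(), b.compute(), c.compute())  # 1 2 3
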
+ Core

* Refactor core byte ingest (:pr:`1768`), (:pr:`1774`)

* Improve import time (:pr:`1833`)
- update to version 0.12.0:

* update changelog (#1757)

* Avoids spurious warning message in concatenate (#1752)

* CLN: cleanup dd.multi (#1728)

* ENH: da.ufuncs now supports DataFrame/Series (#1669)

* Faster array slicing (#1731)

* Avoid calling list on partitions (#1747)

* Fix slicing error with None and ints (#1743)

* Add da.repeat (#1702)

* ENH: add dd.DataFrame.resample (#1741) (example sketch at the end of this list)

* Unify column names in dd.read_csv (#1740)

* replace empty with random in test to avoid nans

* Update diagnostics plots (#1736)

* Allow atop to change chunk shape (#1716)

* ENH: DataFrame.loc now supports 2d indexing (#1726)

* Correct shape when indexing with Ellipsis and None

* ENH: Add DataFrame.pivot_table (#1729)

* CLN: cleanup DataFrame class handling (#1727)

* ENH: Add DataFrame.combine_first (#1725)

* ENH: Add DataFrame all/any (#1724)

* micro-optimize _deps (#1722)

* A few small tweaks to da.Array.astype (#1721)

* BUG: Fixed metadata lookup failure in Accessor (#1706)

* Support auto-rechunking in stack and concatenate (#1717)

* Forward `get` kwarg in df.to_csv (#1715)

* Add rename support for multi-level columns (#1712)

* Update paid support section

* Add `drop` to reset_index (#1711)

* Cull dask.arrays on slicing (#1709)

* Update dd.read_* functions in docs

* WIP: Feature/dataframe aggregate (implements #1619) (#1678)

* Add da.round (#1708)

* Executor -> Client

* Add support of getitem for multilevel columns (#1697)

* Prepend optimization keywords with name of optimization (#1690)

* Add dd.read_table (#1682)

* Fix dd.pivot_table dtype to be deterministic (#1693)

* da.random with state is consistent across sizes (#1687)

* Remove `raises`, use pytest.raises instead (#1679)

* Remove unnecessary calls to list (#1681)

* Dataframe tree reductions (#1663)

* Add global optimizations to compute (#1675)

* TST: rename dataframe eq to assert_eq (#1674)

* ENH: Add DataFrame/Series.align (#1668)

* CLN: dataframe.io (#1664)

* ENH: Add DataFrame/Series clip_xxx (#1667)

* Clear divisions on single_partitions_merge (#1666)

* ENH: add dd.pivot_table (#1665)

* Typo in `use-cases`? (#1670)

* add distributed follow link doc page

* Dataframe elemwise (#1660)

* Windows file and endline test handling (#1661)

* remove old badges

* Fix #1656: failures when parallel testing (#1657)

* Remove use of multiprocessing.Manager (#1652) (#1653)

* A few fixes for `map_blocks` (#1654)

* Automatically expand chunking in atop (#1644)

* Add AppVeyor configuration (#1648)

* TST: move flake8 to travis script (#1655)

* CLN: Remove unused funcs (#1638)

* Implementing .size and groupby size method (#1627) (#1649)

* Use strides, shape, and offset in memmap tokenize (#1646)

* Validate scalar metadata is scalar (#1642)

* Convert readthedocs links for their .org -> .io migration for
hosted projects (#1639)

* CLN: little cleanup of dd.categorical (#1635)

* Signature of Array.transpose matches numpy (#1632)

* Error nicely when indexing Array with Array (#1629)

* ENH: add DataFrame.get_xtype_counts (#1634)

* PEP8: some fixes (#1633)
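
A rough illustration of the ``dd.DataFrame.resample`` addition noted earlier
in this list (#1741); the minute-frequency data is made up::

  import pandas as pd
  import dask.dataframe as dd

  index = pd.date_range("2016-01-01", periods=240, freq="min")
  ddf = dd.from_pandas(pd.DataFrame({"x": range(240)}, index=index),
                       npartitions=4)

  # Resampling mirrors the pandas API on a time-indexed dask DataFrame:
  # bucket the minute-level rows into hourly bins and average each bin.
  hourly = ddf.resample("1h").mean()
  print(hourly.compute())
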
- changes from version 0.11.1:

* support uniform index partitions in set_index(sorted) (#1626)

* Groupby works with multiprocessing (#1625)

* Use a nonempty index in _maybe_partial_time_string

* Fix segfault in groupby-var

* Support Pandas 0.19.0

* Deprecations (#1624)

* work-around for ddf.info() failing because of
https://github.com/pydata/pandas/issues/14368 (#1623)

* .str accessor needs to pass thru both args & kwargs (#1621)

* Ensure dtype is provided in additional tests (#1620)

* coerce rounded numbers to int in dask.array.ghost (#1618)

* Use assert_eq everywhere in dask.array tests (#1617)

* Update documentation (#1606)

* Support new_axes= keyword in atop (#1612)

* pass through node_attr and edge_attr in dot_graph (#1614)

* Add swapaxes to dask array (#1611)

* add clip to Array (#1610)

* Add atop(concatenate=False) keyword argument (#1609)

* Better error message on metadata inference failure (#1598)

* ENH/API: Enhanced Categorical Accessor (#1574)

* PEP8: dataframe fix except E127,E402,E501,E731 (#1601)

* ENH: dd.get_dummies for categorical Series (#1602)

* PEP8: some fixes (#1605)

* Fix da.learn tests for scikit-learn release (#1597)

* Suppress warnings in psutil (#1589)

* avoid more timeseries warnings (#1586)

* Support inplace operators in dataframe (#1585)

* Squash warnings in resample (#1583)

* expand imports for dask.distributed (#1580)

* Add indicator keyword to dd.merge (#1575)

* Error loudly if `nrows` used in read_csv (#1576)

* Add versioneer (#1569)

* Strengthen statement about gitter for developers in docs

* Raise IndexError on out of bounds slice. (#1579)

* ENH: Support Series in read_hdf (#1577)

* COMPAT/API: DataFrame.categorize missing values (#1578)

* Add `pipe` method to dask.dataframe (#1567)

* Sample from `read_bytes` ends on a delimiter (#1571)

* Remove mention of bag join in docs (#1568)

* Tokenize mmap works without filename (#1570)

* String accessor works with indexes (#1561)

* corrected links to documentation from Examples (#1557)

* Use conda-forge channel in travis (#1559)

* add s3fs to travis.yml (#1558)

* ENH: DataFrame.select_dtypes (#1556)

* Improve slicing performance (#1539)

* Check meta in `__init__` of _Frame

* Fix metadata in Series.getitem

* A few changes to `dask.delayed` (#1542)

* Fixed read_hdf example (#1544)

* add section on distributed computing with link to toc

* Fix spelling (#1535)

* Only fuse simple indexing with getarray backends (#1529)

* Deemphasize graphs in docs (#1531)

* Avoid pickle when tokenizing __main__ functions (#1527)

* Add changelog doc going up to dask 0.6.1 (2015-07-23). (#1526)

* update dataframe docs

* update index

* Update to highlight the use of glob based file naming option for
df exports (#1525)

* Add custom docstring to dd.to_csv, mentioning that one file per
partition is written (#1524)

* Run slow tests in Travis for all Python versions, even if coverage
check is disabled. (#1523)

* Unify example doc pages into one (#1520)

* Remove lambda/inner functions in dask.dataframe (#1516)

* Add documentation for dataframe metadata (#1514)

* \"dd.map_partitions\" works with scalar outputs (#1515)

* meta_nonempty returns types of correct size (#1513)

* add memory use note to tsqr docstring

* Fix slow consistent keyname test (#1510)

* Chunks check (#1504)

* Fix last 'line' in sample; prevents open quotes. (#1495)

* Create new threadpool when operating from thread (#1487)

* Add finalize- prefix to dask.delayed collections

* Move key-split from distributed to dask

* State that delayed values should be lists in bag.from_delayed
(#1490)

* Use lists in db.from_sequence (#1491)

* Implement user defined aggregations (#1483)

* Field access works with non-scalar fields (#1484)
- Update to 0.11.0

* DataFrames now enforce knowing full metadata (columns, dtypes)
everywhere. Previously we would operate in an ambiguous state
when functions lost dtype information (such as apply). Now all
dataframes always know their dtypes and raise errors asking for
that information when they are unable to infer it (which they
usually can). Some internal attributes like _pd and _pd_nonempty
have been moved. (See the sketch after this entry.)

* The internals of the distributed scheduler have been refactored
to transition tasks between explicit states. This improves
resilience, reasoning about scheduling, plugin operation, and
logging. It also makes the scheduler code easier to understand
for newcomers.

* Breaking Changes
+ The distributed.s3 and distributed.hdfs namespaces are gone.
Use protocols in normal methods like read_text('s3://...')
instead.
+ Dask.array.reshape now errs in some cases where previously
it would have created a very large number of tasks
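
A minimal sketch of supplying metadata explicitly when dask cannot infer it,
as described in the metadata note above; the ``meta=`` keyword shown is the
spelling used by current dask.dataframe and is an assumption for this
release::

  import pandas as pd
  import dask.dataframe as dd

  ddf = dd.from_pandas(pd.DataFrame({"x": [1, 2, 3, 4]}), npartitions=2)

  # apply() on an opaque Python function can lose dtype information, so the
  # expected output name and dtype are declared up front via meta=.
  result = ddf.x.apply(lambda v: v + 1, meta=("x", "int64"))
  print(result.dtype, result.compute().tolist())
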
- update to version 0.10.2:

* raise informative error on merge(on=frame)

* Fix crash with -OO Python command line (#1388)

* [WIP] Read hdf partitioned (#1407)

* Add dask.array.digitize. (#1409)

* Adding documentation to create dask DataFrame from HDF5 (#1405)

* Unify shuffle algorithms (#1404)

* dd.read_hdf: clear errors on exceeding row numbers (#1406)

* Rename `get_division` to `get_partition`

* Add nice error messages on import failures

* Use task-based shuffle in hash_joins (#1383)

* Fixed #1381: Reimplemented DataFrame.repartition(npartitions=N) so
it doesn't require indexing and just coalesces existing partitions
without shuffling/balancing (#1396)

* Import visualize from dask.diagnostics in docs

* Backport `equal_nans` to older versions of numpy

* Improve checks for dtype and shape in dask.array

* Progress bar process should be daemon

* LZMA may not be available in python 3 (#1391)

* dd.to_hdf: multiple files multiprocessing avoid locks (#1384)

* dir works with numeric column names

* Dataframe groupby works with numeric column names

* Use fsync when appending to partd

* Fix pickling issue in dataframe to_bag

* Add documentation for dask.dataframe.to_hdf

* Fixed a copy-paste typo in DataFrame.map_partitions docstring

* Fix 'visualize' import location in diagnostics documentation
(#1376)

* update cheat sheet (#1371)
- update to version 0.10.1:

* `inline` no longer removes keys (#1356)

* avoid c: in infer_storage_options (#1369)

* Protect reductions against empty partitions (#1361)

* Add doc examples for dask.array.histogram. (#1363)

* Fix typo in pip install requirements path (#1364)

* avoid unnecessary dependencies between save tasks in
dataframe.to_hdf (#1293)

* remove xfail mark for blosc missing const

* Add `anon=True` for read from s3 test

* `subs` doesn't needlessly compare keys and values

* Use pytest.importorskip instead of try/except/return pattern

* Fixes for bokeh 0.12.0

* Multiprocess scheduler handles unpickling errors

* da.random with array-like parameters (#1327)

* Fixes issue #1337 (#1338)

* Remove dask runtime dependence on mock 2.7 backport.

* Load known but external protocols automatically (#1325)

* Add center argument to Series/DataFrame.rolling (#1280)

* Add Bag.random_sample method. (#1332)

* Correct docs install command and add missing required packages
(#1333)

* Mark the 4 slowest tests as slow to get a faster suite by
default. (#1334)

* Travis: Install mock package in Python 2.7.

* Automatic blocksize for read_csv based on available memory and
number of cores.

* Replace \"Matthew Rocklin\" with \"Dask Development Team\" (#1329)

* Support column assignment in DataFrame (#1322)

* Few travis fixes, pandas version >= 0.18.0 (#1314)

* Don't run hdf test if pytables package is not present. (#1323)

* Add delayed.compute to api docs.

* Support datetimes in DataFrame._build_pd (#1319)

* Test setting the index with datetime with timezones, which is a
pandas-defined dtype (#1315)

* Add s3fs to requirements (#1316)

* Pass dtype information through in Series.astype (#1320)

* Add draft of development guidelines (#1305)

* Skip tests needing optional package when it's not present. (#1318)

* DOC: Document DataFrame.categorize

* make dd.to_csv support writing to multiple csv files (#1303)

* quantiles for repartitioning (#1261)

* DOC: Minimal doc for get_sync (#1312)

* Pass through storage_options in db.read_text (#1304)

* Fixes #1237: correctly propagate storage_options through read_*
APIs and use urlsplit to automatically get remote connection
settings (#1269)

* TST: Travis build matrix to specify numpy/pandas ver (#1300)

* amend doc string to Bag.to_textfiles

* Return dask.Delayed when saving files with compute=False (#1286)

* Support empty or small dataframes in from_pandas (#1290)

* Add validation and tests for order breaking name_function (#1275)

* ENH: dataframe now supports partial string selection (#1278)

* Fix typo in spark-dask docs

* added note and verbose exception about CSV parsing errors (#1287)
- update to version 0.10.0:

* Add parametrization to merge tests

* Add more challenging types to nonempty_sample_df test

* Windows fixes

* TST: Fix coveralls badge (#1276)

* Sort index on shuffle (#1274)

* Update specification docs to reflect new spec.

* Add groupby docs (#1273)

* Update spark docs

* Rolling class receives normal arguments (unchecked other than
pandas call), stores at

* Reduce communication in rolling operations #1242 (#1270)

* Fix Shuffle (#1255)

* Work on earlier versions of Pandas

* Handle additional Pandas types

* Use non-empty fake dataframe in merge operations

* Add failing test for merge case

* Add utility function to create sample dataframe

* update release procedure

* amend doc string to Bag.to_textfiles (#1258)

* Drop Python 2.6 support (#1264)

* Clean DataFrame naming conventions (#1263)

* Fix some bugs in the rolling implementation.

* Fix core.get to use new spec

* Make graph definition recursive

* Handle empty partitions in dask.bag.to_textfiles

* test index.min/max

* Add regression test for non-ndarray slicing

* Standardize dataframe keynames

* bump csv sample size to 256k (#1253)

* Switch tests to utils.tmpdir (#1251)

* Fix dot_graph filename split bug

* Correct documentation to reflect argument existing now.

* Allow non-zero axis for .rolling (for application over columns)

* Fix scheduler behavior for top-level lists

* Various spelling mistakes in docstrings, comments, exception
messages, and a filename

* Fix typo. (#1247)

* Fix tokenize in dask.delayed

* Remove unused imports, pep8 fixes

* Fix bug in slicing optimization

* Add Task Shuffle (#1186)

* Add bytes API (#1224)

* Add dask_key_name to docs, fix bug in methods

* Allow formatting in dask.dataframe.to_hdf path and key parameters

* Match pandas' exceptions a bit closer in the rolling API. Also,
correct computation f

* Add tests to package (#1231)

* Document visualize method (#1234)

* Skip new rolling API's tests if the pandas we have is too old.

* Improve df_or_series.rolling(...) implementation.

* Remove `iloc` property on `dask.dataframe`

* Support for the new pandas rolling API.

* test delayed names are different under kwargs

* Add Hussain Sultan to AUTHORS

* Add `optimize_graph` keyword to multiprocessing get

* Add `optimize_graph` keyword to `compute`

* Add dd.info() (#1213)

* Cleanup base tests

* Add groupby documentation stub

* pngmath is deprecated in sphinx 1.4

* A few docfixes

* Extract dtype in dd.from_bcolz

* Throw NotImplementedError if old toolz.accumulate

* Add isnull and notnull for dataframe

* Add dask.bag.accumulate

* Fix categorical partitioning

* create single lock for glob read_hdf

* Fix failing from_url doctest

* Add missing api to bag docs

* Add Skipper Seabold to AUTHORS.

* Don't use mutable default argument

* Fix typo

* Ensure to_task_dasks always returns a task

* Fix dir for dataframe objects

* Infer metadata in dd.from_delayed

* Fix some closure issues in dask.dataframe

* Add storage_options keyword to read_csv

* Define finalize function for dask.dataframe.Scalar

* py26 compatibility

* add stacked logos to docs

* test from-array names

* rename from_array tasks

* add atop to array docs

* Add motivation and example to delayed docs

* splat out delayed values in compute docs

* Fix optimize docs

* add html page with logos

* add dask logo to documentation images

* Few pep8 cleanups to dask.dataframe.groupby

* Groupby aggregate works with list of columns

* Use different names for input and output in from_array

* Don't enforce same column names

* don't write header for first block in csv

* Add var and std to DataFrame groupby (#1159)

* Move conda recipe to conda-forge (#1162)

* Use function names in map_blocks and elemwise (#1163)

* add hyphen to delayed name (#1161)

* Avoid shuffles when merging with Pandas objects (#1154)

* Add DataFrame.eval

* Ensure future imports

* Add db.Bag.unzip

* Guard against shape attributes that are not sequences

* Add dask.array.multinomial
- update to version 0.9.0:

* No upstream changelog
- update to version 0.8.2:

* No upstream changelog
- update to version 0.8.1:

* No upstream changelog
- update to version 0.8.0:

* No upstream changelog
- update to version 0.7.5:

* No upstream changelog
- update to version 0.7.0:

* No upstream changelog
- update to version 0.6.1:

* No upstream changelog

Tue Jul 14 14:00:00 2015 toddrme2178@gmail.com
- Update to 0.6.0

* No upstream changelog

Tue May 19 14:00:00 2015 toddrme2178@gmail.com
- Update to 0.5.0

* No upstream changelog

Thu Apr 9 14:00:00 2015 toddrme2178@gmail.com
- Initial version


 