Changelog for
spark-2.2.3-3.1.noarch.rpm :
Mon Feb 18 13:00:00 2019 Johannes Grassler
- Build with -Phive and -Phive-thriftserver
- Replace upstream fix-spark-home with a simplified one of our own
- Fix path for jars in service files
Fri Feb 15 13:00:00 2019 Johannes Grassler
- Update to version 2.2.3
* [SPARK-26327] - Metrics in FileSourceScanExec not update correctly while
relation.partitionSchema is set
* [SPARK-21402] - Fix java array of structs deserialization
* [SPARK-22951] - count() after dropDuplicates() on emptyDataFrame returns
incorrect value
* [SPARK-23207] - Shuffle+Repartition on a DataFrame could lead to incorrect
answers
* [SPARK-23243] - Shuffle+Repartition on an RDD could lead to incorrect
answers
* [SPARK-24603] - Typo in comments
* [SPARK-24677] - TaskSetManager not updating successfulTaskDurations for old
stage attempts
* [SPARK-24809] - Serializing LongHashedRelation in executor may result in
data error
* [SPARK-24813] - HiveExternalCatalogVersionsSuite still flaky; fall back to
Apache archive
* [SPARK-24927] - The hadoop-provided profile doesn't play well with
Snappy-compressed Parquet files
* [SPARK-24948] - SHS filters wrongly some applications due to permission
check
* [SPARK-24950] - scala DateTimeUtilsSuite daysToMillis and millisToDays
fails w/java 8 181-b13
* [SPARK-24957] - Decimal arithmetic can lead to wrong values using codegen
* [SPARK-25081] - Nested spill in ShuffleExternalSorter may access a released
memory page
* [SPARK-25114] - RecordBinaryComparator may return wrong result when
subtraction between two words is divisible by
Integer.MAX_VALUE
* [SPARK-25144] - distinct on Dataset leads to exception due to Managed
memory leak detected
* [SPARK-25164] - Parquet reader builds entire list of columns once for each
column
* [SPARK-25402] - Null handling in BooleanSimplification
* [SPARK-25568] - Continue to update the remaining accumulators when failing
to update one accumulator
* [SPARK-25591] - PySpark Accumulators with multiple PythonUDFs
* [SPARK-25714] - Null Handling in the Optimizer rule BooleanSimplification
* [SPARK-25726] - Flaky test: SaveIntoDataSourceCommandSuite.`simpleString is
redacted`
* [SPARK-25797] - Views created via 2.1 cannot be read via 2.2+
* [SPARK-25854] - mvn helper script always exits w/1, causing mvn builds to
fail
* [SPARK-26233] - Incorrect decimal value with java beans and
first/last/max... functions
* [SPARK-26537] - update the release scripts to point to gitbox
* [SPARK-26545] - Fix typo in EqualNullSafe's truth table comment
* [SPARK-26553] - NameError: global name '_exception_message' is not defined
* [SPARK-26802] - CVE-2018-11760: Apache Spark local privilege escalation
vulnerability
* [SPARK-26118] - Make Jetty's requestHeaderSize configurable in Spark
* [SPARK-20715] - MapStatuses shouldn't be redundantly stored in both
ShuffleMapStage and MapOutputTracker
* [SPARK-25253] - Refactor pyspark connection & authentication
* [SPARK-25576] - Fix lint failure in 2.2
* [SPARK-24564] - Add test suite for RecordBinaryComparator
- Add _service
- Drop fix-spark-home-and-conf.patch (no longer needed since all scripts use
find-spark-home now)
- Adjust build.sh to account for automatic Hadoop version and
new Kafka version
- Address various packaging deficiencies (bsc#1081531):
* Remove configuration templates from /usr/share/spark
* Fix static versioning
* Get rid of wildcards in %files section
* Improve Summary
Sat Feb 9 13:00:00 2019 ashwin.agate@suse.com
- Added Restart and RestartSec so that the spark master and
spark worker services restart automatically (bsc#1091479)
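The entry above adds systemd restart behavior to the unit files. A sketch of the relevant [Service] additions; the exact Restart policy and delay used in the package are assumptions here:

```ini
# Hypothetical excerpt from spark-master.service / spark-worker.service.
# Restart the daemon automatically if it exits; wait a few seconds between
# attempts. The concrete values in the shipped units may differ.
[Service]
Restart=on-failure
RestartSec=5
```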
Wed Mar 21 13:00:00 2018 ashwin.agate@suse.com
- Remove drizzle jdbc jar (bsc#1084084)
Thu Mar 8 13:00:00 2018 tbechtold@suse.com
- Add fix-spark-home-and-conf.patch
The patch fixes SPARK_HOME and SPARK_CONF_DIR in the various
bin/spark-* scripts.
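A minimal sketch of the kind of change such a patch makes: pin SPARK_HOME and SPARK_CONF_DIR to the packaged locations instead of deriving them from the script's own path. The paths below are illustrative assumptions, not the patch's actual contents:

```shell
# Hypothetical snippet for the top of a bin/spark-* wrapper script.
# Fall back to the packaged install locations when the variables are unset.
export SPARK_HOME="${SPARK_HOME:-/usr/share/spark}"
export SPARK_CONF_DIR="${SPARK_CONF_DIR:-/etc/spark}"
```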
Thu Mar 8 13:00:00 2018 ashwin.agate@suse.com
- Added SPARK_DAEMON_JAVA_OPTS to set java heap size
settings in spark-worker and spark-master service
files.
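SPARK_DAEMON_JAVA_OPTS is the standard Spark variable for passing JVM options to the master and worker daemons. A sketch of how it might be wired into a unit file; the heap sizes shown are examples, not the packaged defaults:

```ini
# Hypothetical excerpt from spark-worker.service. The -Xms/-Xmx values
# are illustrative; the package's actual defaults may differ.
[Service]
Environment="SPARK_DAEMON_JAVA_OPTS=-Xms256m -Xmx1g"
```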
Tue Mar 6 13:00:00 2018 tbechtold@suse.com
- Install /etc/spark/spark-env. This script is automatically
read during startup and can be used for custom configuration
- Install /etc/spark/spark-defaults.conf
- Create /run/spark dir via systemd tmpfiles
- Add missing Requires/BuildRequires for systemd
- Drop openstack-suse-macros BuildRequires and use the typical
way to create a spark user/group and homedir
- Add useful description
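The /run/spark directory mentioned above would be declared via a tmpfiles.d fragment so systemd recreates it on every boot. A sketch of such a fragment; the mode and ownership are assumptions based on the spark user/group the package creates:

```ini
# Hypothetical /usr/lib/tmpfiles.d/spark.conf:
# create /run/spark at boot, owned by the spark service user.
d /run/spark 0755 spark spark -
```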
Fri Feb 23 13:00:00 2018 dmueller@suse.com
- cleanup spec file
Fri Feb 23 13:00:00 2018 jodavis@suse.com
- Fix spark-worker.service to use port 7077, avoiding conflict (bsc#1081275)
Mon Feb 19 13:00:00 2018 tbechtold@suse.com
- Fix ExecStartPre bash syntax in spark-worker.service (bsc#1081275)
Mon Jul 24 14:00:00 2017 jbrownell@suse.com
- Initial package