Changelog for spark-2.2.3-lp150.9.1.noarch.rpm:
* Wed Oct 19 2022 Darragh O'Reilly
- Add CVE-2022-33891.patch (bsc#1204326, CVE-2022-33891) [SPARK-38992][CORE] Avoid using bash -c in ShellBasedGroupsMappingProvider
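  As background on the technique named in SPARK-38992: instead of handing a command
  string to "bash -c", the argument vector is passed directly, so user-supplied input
  is never interpreted by a shell. A minimal Scala sketch of that pattern (GroupLookupSketch
  and its helper are illustrative names, not the patched Spark code):

    import scala.sys.process._

    // Hypothetical helper showing the safe pattern: the command is given as an
    // argument vector rather than a single string passed to "bash -c", so a
    // username like "foo; rm -rf /" is treated as data, not shell syntax.
    object GroupLookupSketch {
      def unixGroups(username: String): Seq[String] = {
        // Seq-based ProcessBuilder from scala.sys.process: no shell is involved.
        val output = Seq("id", "-Gn", username).!!
        output.trim.split("\\s+").toSeq
      }

      def main(args: Array[String]): Unit =
        println(unixGroups(args.headOption.getOrElse("root")))
    }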
* Fri Sep 25 2020 Jacek Tomasiak
- Add _constraints to prevent build from running out of disk space
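  For reference, an OBS _constraints file reserves build resources such as disk space;
  a minimal sketch of such a file (the 8 GB figure is an assumed placeholder, not
  necessarily the value used by this package):

    <constraints>
      <hardware>
        <disk>
          <size unit="G">8</size>
        </disk>
      </hardware>
    </constraints>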
* Thu Apr 04 2019 jodavis@suse.com
- Add fix-spark-home-and-conf.patch
  The patch fixes SPARK_HOME and SPARK_CONF_DIR in the different bin/spark-*
  scripts to call find-spark-home.
* Tue Mar 12 2019 jodavis@suse.com
- Add metrics-core-2.2.0.jar and kafka_2.10-0.8.2.1.jar to dist/jars
- Note these version changes must match up with the spark-kit package, and
  that upstream OpenStack Monasca Transform is using Scala 2.10 and Kafka 0.8.
* Mon Mar 11 2019 jodavis@suse.com
- Changed spark-streaming-kafka to 0.8-2.10 and changed copied file name to
  include versions
* Thu Mar 07 2019 jodavis@suse.com
- Changed Scala version in .jar filename to 2.10 to match build.sh
* Thu Mar 07 2019 Johannes Grassler
- Modified build.sh to build against Scala 2.10
* Mon Feb 18 2019 Johannes Grassler
- Build with -Phive and -Phive-thriftserver
- Replace upstream fix-spark-home by a simplified one of our own
- Fix path for jars in service files
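  The Hive profiles mentioned above are Maven build profiles; a hedged example of how
  they are typically passed to Spark's Maven build (the package's actual build.sh
  invocation may differ):

    ./build/mvn -Phive -Phive-thriftserver -DskipTests clean package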
* Fri Feb 15 2019 Johannes Grassler
- Update to version 2.2.3
* [SPARK-26327] - Metrics in FileSourceScanExec not update correctly while relation.partitionSchema is set
* [SPARK-21402] - Fix java array of structs deserialization
* [SPARK-22951] - count() after dropDuplicates() on emptyDataFrame returns incorrect value
* [SPARK-23207] - Shuffle+Repartition on an DataFrame could lead to incorrect answers
* [SPARK-23243] - Shuffle+Repartition on an RDD could lead to incorrect answers
* [SPARK-24603] - Typo in comments
* [SPARK-24677] - TaskSetManager not updating successfulTaskDurations for old stage attempts
* [SPARK-24809] - Serializing LongHashedRelation in executor may result in data error
* [SPARK-24813] - HiveExternalCatalogVersionsSuite still flaky; fall back to Apache archive
* [SPARK-24927] - The hadoop-provided profile doesn't play well with Snappy-compressed Parquet files
* [SPARK-24948] - SHS filters wrongly some applications due to permission check
* [SPARK-24950] - scala DateTimeUtilsSuite daysToMillis and millisToDays fails w/java 8 181-b13
* [SPARK-24957] - Decimal arithmetic can lead to wrong values using codegen
* [SPARK-25081] - Nested spill in ShuffleExternalSorter may access a released memory page
* [SPARK-25114] - RecordBinaryComparator may return wrong result when subtraction between two words is divisible by Integer.MAX_VALUE
* [SPARK-25144] - distinct on Dataset leads to exception due to Managed memory leak detected
* [SPARK-25164] - Parquet reader builds entire list of columns once for each column
* [SPARK-25402] - Null handling in BooleanSimplification
* [SPARK-25568] - Continue to update the remaining accumulators when failing to update one accumulator
* [SPARK-25591] - PySpark Accumulators with multiple PythonUDFs
* [SPARK-25714] - Null Handling in the Optimizer rule BooleanSimplification
* [SPARK-25726] - Flaky test: SaveIntoDataSourceCommandSuite.`simpleString is redacted`
* [SPARK-25797] - Views created via 2.1 cannot be read via 2.2+
* [SPARK-25854] - mvn helper script always exits w/1, causing mvn builds to fail
* [SPARK-26233] - Incorrect decimal value with java beans and first/last/max... functions
* [SPARK-26537] - update the release scripts to point to gitbox
* [SPARK-26545] - Fix typo in EqualNullSafe's truth table comment
* [SPARK-26553] - NameError: global name '_exception_message' is not defined
* [SPARK-26802] - CVE-2018-11760: Apache Spark local privilege escalation vulnerability
* [SPARK-26118] - Make Jetty\'s requestHeaderSize configurable in Spark
* [SPARK-20715] - MapStatuses shouldn\'t be redundantly stored in both ShuffleMapStage and MapOutputTracker
* [SPARK-25253] - Refactor pyspark connection & authentication
* [SPARK-25576] - Fix lint failure in 2.2
* [SPARK-24564] - Add test suite for RecordBinaryComparator
- Add _service
- Drop fix-spark-home-and-conf.patch (no longer needed since all scripts use
  find-spark-home now)
- Adjust build.sh to account for automatic Hadoop version and new Kafka version
- Address various packaging deficiencies (bsc#1081531):
* Remove configuration templates from /usr/share/spark
* Fix static versioning
* Get rid of wildcards in %files section
* Improve Summary
* Sat Feb 09 2019 ashwin.agate@suse.com
- Added Restart and RestartSec to restart spark master and spark worker
  (bsc#1091479)
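  As an illustration of what adding Restart and RestartSec means, a systemd [Service]
  section gains lines like the following; the policy and delay shown are assumed example
  values, not necessarily those shipped in the unit files:

    [Service]
    Restart=on-failure
    RestartSec=5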
* Wed Mar 21 2018 ashwin.agate@suse.com
- Remove drizzle jdbc jar (bsc#1084084)
* Thu Mar 08 2018 tbechtold@suse.com
- Add fix-spark-home-and-conf.patch
  The patch fixes SPARK_HOME and SPARK_CONF_DIR in the different bin/spark-*
  scripts.
* Thu Mar 08 2018 ashwin.agate@suse.com
- Added SPARK_DAEMON_JAVA_OPTS to set Java heap size settings in spark-worker
  and spark-master service files.
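  SPARK_DAEMON_JAVA_OPTS is Spark's standard variable for passing JVM options to the
  master/worker daemons; a hypothetical setting in a unit file (the heap sizes are
  examples only, not the packaged defaults) could look like:

    [Service]
    Environment="SPARK_DAEMON_JAVA_OPTS=-Xms512m -Xmx1g"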
* Tue Mar 06 2018 tbechtold@suse.com
- Install /etc/spark/spark-env. This script is automatically read during
  startup and can be used for custom configuration.
- Install /etc/spark/spark-defaults.conf
- Create /run/spark dir via systemd tmpfiles
- Add missing Requires/BuildRequires for systemd
- Drop openstack-suse-macros BuildRequires and use the typical way to create
  a spark user/group and homedir
- Add useful description
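  The /run/spark directory mentioned above would typically be declared through a
  tmpfiles.d entry; the file path, mode, and ownership below are assumptions for
  illustration, not necessarily what the package installs:

    # e.g. /usr/lib/tmpfiles.d/spark.conf
    d /run/spark 0755 spark spark -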
* Fri Feb 23 2018 dmueller@suse.com
- cleanup spec file
* Fri Feb 23 2018 jodavis@suse.com
- Fix spark-worker.service to use port 7077, avoiding conflict (bsc#1081275)
* Mon Feb 19 2018 tbechtold@suse.com
- Fix ExecStartPre bash syntax in spark-worker.service (bsc#1081275)
* Mon Jul 24 2017 jbrownell@suse.com
- Initial package