Concurrency problem when using the subtype operator <:< #10766
Note that in general when filing a bug report, we ask that you try it on a current Scala version before reporting (at present 2.12.4 or 2.13.0-M3; the latest 2.11.x release was 2.11.12).
I was able to reproduce the problem with Scala 2.11.12, 2.12.4, and 2.13.0-M3 with Java 1.8.0_151 on Windows 10.
That moment when you realize
@SethTisue both bug #6240 and https://docs.scala-lang.org/overviews/reflection/thread-safety.html suggest that this problem was fixed as of 2.11. Based on profiling, if it really is non-threadsafe code, it spends a surprising amount of time holding locks.
You're right about 6240. But I also feel like there's further history on this, maybe on another ticket...?
re: holding locks, /~https://github.com/scala/scala/pull/3386/files gives some idea of the contortions involved in trying to make this stuff threadsafe.
Is anyone working on this? If not, any recommendations on fixing it? This seems like a pretty major issue, and it currently affects Apache Spark, making certain methods not thread safe.
@SethTisue I know the Scala team is trying not to do any more work on Scala 2.11, but it will be a long time before all Scala 2.11-based Spark disappears from The Internets, so I hope the Scala team will consider back-porting a fix to 2.11 and 2.12. |
scala.reflect is experimental. Making it thread safe without killing performance is nearly impossible. I was against even trying. The workaround is to synchronize access by the user, or not use it at all. 2.11 is EOL. When you consider all this together, I don't see how we can justify cutting another 2.11 release.
(just my own thoughts, not a team statement:) Quite apart from 2.11's EOL-ness, tinkering with
I was going to suggest this as well. I'm not sure if the wording is clear. Just in case, I'll rephrase: Spark could, we suggest, add locks to their own code at the
I'm curious to what extent Spark really needs
Perhaps
(I don't know much about Spark, but) an example usage is /~https://github.com/apache/spark/blob/v2.4.0/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/ScalaReflection.scala#L61-L69, which uses the Scala type information to grab a Spark SQL DataType. Here's another: /~https://github.com/apache/spark/blob/v2.4.0/sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala#L262-L271. Generally speaking, it seems like "take this datatype written in Scala, and serialize it into some other Spark datatype"? To support things like Array, what other facility would you recommend?
I wasn't expecting a backport of a fix to 2.11 or 2.12, but IMO this behavior subverts expectations, and we should try to make Scala reflection better for future versions. I'm adding a workaround in Spark: apache/spark#24085. Still, I think this should be fixed. There is always a way to achieve near-identical performance in a thread-safe way (for all cases that were previously correct). For example, even the dumb solution of synchronizing calls to <:< causes imperceptible slowdown in all currently correct (single-threaded) cases. I've done my own experiment, and you can verify this experimentally as well.
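The Spark workaround (apache/spark#24085) is essentially this coarse-grained approach; a minimal sketch, with hypothetical names (SubtypeLock and isSubtype are illustrative, not Spark's actual API):

```scala
import scala.reflect.runtime.universe.Type

// Funnel every <:< call through a single global lock, so no two threads
// ever run a subtype check concurrently. Coarse, but the single-threaded
// overhead of an uncontended lock is negligible.
object SubtypeLock {
  def isSubtype(a: Type, b: Type): Boolean =
    this.synchronized { a <:< b }
}
```

Callers replace `a <:< b` with `SubtypeLock.isSubtype(a, b)`; the trade-off is that heavily concurrent reflection-based code serializes on this one lock.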
It's unfortunate that Spark is using runtime reflection to generate the serialization code. The long-term solution would be to use typeclasses + compile-time typeclass derivation, but it might require some re-architecting. Flink doesn't have this problem, because it generates the serialization code at compile time with a macro: /~https://github.com/apache/flink/blob/master/flink-scala/src/main/scala/org/apache/flink/api/scala/codegen/TypeInformationGen.scala. This is still not as ideal as using typeclasses, but much better than runtime reflection. I guess another question to ask would be why this code needs to be generated in parallel? Would it not be good enough to generate the serialization code beforehand and then initiate the job? Or am I missing something?
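As a rough illustration of the typeclass alternative, here is a compile-time derivation sketch (all names are hypothetical, not Spark's or Flink's actual API). The compiler resolves the instance for each type at compile time, so no runtime reflection, and hence no thread-safety concern, is involved:

```scala
// A typeclass mapping a Scala type to a (stringified) Spark-style DataType.
trait DataTypeFor[A] { def dataType: String }

object DataTypeFor {
  // Base instances for primitive types.
  implicit val intType: DataTypeFor[Int] =
    new DataTypeFor[Int] { def dataType = "IntegerType" }
  implicit val stringType: DataTypeFor[String] =
    new DataTypeFor[String] { def dataType = "StringType" }

  // Derived instance: composes for Array without any runtime type checks.
  implicit def arrayType[A](implicit elem: DataTypeFor[A]): DataTypeFor[Array[A]] =
    new DataTypeFor[Array[A]] { def dataType = s"ArrayType(${elem.dataType})" }

  // Summoner: DataTypeFor.of[Array[Int]] == "ArrayType(IntegerType)"
  def of[A](implicit ev: DataTypeFor[A]): String = ev.dataType
}
```

Because each instance is a plain immutable value selected by implicit resolution, calling `DataTypeFor.of` from many threads is trivially safe.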
Spark is intended to be fully thread safe, not only within a Spark job, but also across parallel jobs in the same application: https://spark.apache.org/docs/latest/job-scheduling.html#scheduling-within-an-application
One use case where this becomes important is for reading multiple data sources at once. Here the bottleneck is often waiting on each data source to return results, so it is best for the user to start multiple threads, one for each data source, and join all of them once each one is materialized into an RDD/DataFrame/Dataset or its side effect is complete. I agree about type classes. It would be a better solution, but tricky to switch to now. |
The worst part is that Spark actually has an
Ah yes, I forgot about
To be fair it's not easy to write typeclass derivation that composes well in Scala, just take for example
Happy to see the workaround is doable! Just to be clear, we do not intend to provide (full) thread-safety for scala.reflect -- I don't think it's feasible. Closing this ticket to reflect that. |
@adriaanm @SethTisue I don't understand.
Apparently our communication around scala.reflect being experimental hasn't been very effective. It's a layer built on top of the compiler, and you'll just have to take my word for it (or look around in the PR queue) that it's not an easy thing to make thread safe. If it were as easy as you suggest in 1., we'd have done it.
Sorry if I misunderstand or missed something. What's wrong with approach 1? If there's a problem with it, the Spark side shouldn't take that approach either.
Make ScalaReflection subtype checking thread safe by adding a lock. There is a thread safety bug in the <:< operator in all versions of Scala (scala/bug#10766).
Existing tests and a new one for the new subtype checking function.
Closes apache#24085 from mwlon/SPARK-26555.
Authored-by: mwlon <mloncaric@hmc.edu>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
…thread safe
This is a Spark 2.4.x backport of #24085. Original description follows below:
## What changes were proposed in this pull request?
Make ScalaReflection subtype checking thread safe by adding a lock. There is a thread safety bug in the <:< operator in all versions of Scala (scala/bug#10766).
## How was this patch tested?
Existing tests and a new one for the new subtype checking function.
Closes #24913 from JoshRosen/joshrosen/SPARK-26555-branch-2.4-backport.
Authored-by: mwlon <mloncaric@hmc.edu>
Signed-off-by: Josh Rosen <rosenville@gmail.com>
* Synchronize =:= too just in case, due to strange failures... scala/bug#10766
* CompilerTest linux permission ro-Z
* Revert "CompilerTest linux permission ro-Z" This reverts commit 9744a4c.
Using Scala 2.11.6 with Java 1.8.0_111 on Ubuntu 16.04.2 LTS.
I have the following code:
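A minimal sketch consistent with the description below — someProperty and someStringProperty are the names from the report, but the concrete types and the test harness are assumed, not the reporter's original code:

```scala
import scala.reflect.runtime.universe._

object SubtypeRepro {
  // Types standing in for the properties named in the report; the actual
  // classes involved are assumed here.
  val someProperty: Type = typeOf[Option[Any]]
  val someStringProperty: Type = typeOf[Option[String]]

  // Runs the subtype check from several threads and returns the number of
  // checks that unexpectedly came back false.
  def run(threads: Int = 8, iters: Int = 1000): Int = {
    val failures = new java.util.concurrent.atomic.AtomicInteger(0)
    val ts = (1 to threads).map { _ =>
      new Thread(new Runnable {
        def run(): Unit =
          for (_ <- 1 to iters)
            if (!(someStringProperty <:< someProperty)) failures.incrementAndGet()
      })
    }
    ts.foreach(_.start())
    ts.foreach(_.join())
    failures.get
  }

  def main(args: Array[String]): Unit =
    // Single-threaded runs are always 0; on a cold JVM, concurrent runs
    // reportedly produce a nonzero count intermittently.
    println(s"unexpected false results: ${run()}")
}
```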
I get this output when I run the above in a loop:
It's expected that someStringProperty be a subtype of someProperty, and someStringProperty <:< someProperty returns true most of the time. But when used from multiple threads (on a cold JVM) it appears to sometimes return false. The problem seems to go away if I add a synchronized keyword around the subtype check.