author    Luck <git@lucko.me>    2018-05-28 15:35:53 +0100
committer Luck <git@lucko.me>   2018-05-28 15:35:53 +0100
commit    c29e96eb5d7981bbb1d9b54e92cbc9f22319efb8 (patch)
tree      89e39cd6dfa854f48ba30386838c807578820b79
parent    70b88264da4ff67dfd7b887e9550c9934f2414c7 (diff)
Handle node insertion in separate worker threads, to reduce the workload on the sampling thread
and some other misc changes
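The pattern this commit applies can be sketched as follows: the sampling thread does only the cheap work of snapshotting stack traces, and hands the (comparatively expensive) bookkeeping to a worker pool. The class and method names below are hypothetical illustrations, not spark's actual API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: keep the sampling thread light by offloading per-sample
// processing to an ExecutorService.
class OffloadSketch {
    private final ExecutorService workerPool = Executors.newCachedThreadPool();
    private final Map<String, List<StackTraceElement[]>> data = new ConcurrentHashMap<>();

    void sampleOnce() {
        // cheap work on the sampling thread: just snapshot the stacks
        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            String name = e.getKey().getName();
            StackTraceElement[] stack = e.getValue();
            // expensive insertion is handed off to the worker pool
            workerPool.execute(() ->
                    data.computeIfAbsent(name, k -> Collections.synchronizedList(new ArrayList<>()))
                        .add(stack));
        }
    }

    Map<String, List<StackTraceElement[]>> drain() {
        // wait for queued insertions to finish before exposing the data
        workerPool.shutdown();
        try {
            workerPool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return data;
    }
}
```

spark's real `Sampler` additionally tracks pending insertions in a set so that `getData()` can wait for in-flight work, as the diff below shows.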
-rw-r--r--    README.md                                                    60
-rw-r--r--    common/src/main/java/com/sk89q/warmroast/Sampler.java        84
-rw-r--r--    common/src/main/java/me/lucko/spark/common/CommandHandler.java    37
-rw-r--r--    common/src/main/java/me/lucko/spark/common/SamplerBuilder.java     9
4 files changed, 131 insertions, 59 deletions
diff --git a/README.md b/README.md
index c60fe0b..a980524 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
-# spark
+# :zap: spark
Spark is a CPU profiling plugin based on sk89q's [WarmRoast profiler](https://github.com/sk89q/WarmRoast).
-### What does it do?
+## What does it do?
Effectively, it monitors the activity of the server, and records statistical data about which actions take up the most processing time. These statistics can then be used to diagnose potential performance issues with certain parts of the server or specific plugins.
@@ -9,53 +9,61 @@ Once the data has been recorded, a "call graph" can be formed and displayed in a
spark will not fix "lag" - it is a tool to help diagnose the cause of poor performance.
-### About
+## About
-#### WarmRoast features
+### WarmRoast features
These features are carried over from the upstream "WarmRoast" project.
-* Adjustable sampling frequency.
-* Web-based — profiling is performed on the server, but results are viewed in your browser.
-* Collapse and expand nodes to see details.
-* Easily view CPU usage per method at a glance.
-* Hover to highlight all child methods as a group.
+* The viewer is entirely web-based - no specialist software is required to view the output, just a web browser!
+* Output is arranged as a stack of nested method calls, making it easy to interpret.
+* Nodes can be expanded and collapsed to reveal timing details and further child nodes.
+* The overall CPU usage and contribution of a particular method can be seen at a glance.
* See the percentage of CPU time for each method relative to its parent methods.
+* Sampling frequency can be adjusted.
+* Virtually no overhead or side effects on the target program (the server).
-#### spark features
+### spark features
WarmRoast is an amazing tool for server admins, but it has a few flaws.
* It is not accessible to some people, because in order to use it, you need to have direct SSH (or equivalent) access to the server. (not possible on shared hosts)
-* It can be somewhat clunky to setup and start - firstly, you need to connect to the machine of the server you want to profile. Then, you need to remember the PID of the server, or identify it in a list of running VM display names (not easy when multiple servers are running!) - then allow the profiler to run for a bit, before navigating to a temporary web server hosted by the process.
+* It can be somewhat clunky to set up and start - firstly, you need to connect to the machine of the server you want to profile. Then, you need to remember the PID of the server, or identify it in a list of running VM display names (not easy when multiple servers are running!) - then allow the profiler to run for a bit, before navigating to a temporary web server hosted by the application.
* It's not easy to share profiling data with other developers or admins.
+* You need to have the Java Development Kit installed on your machine.
-I've attempted to address these flaws. With spark, you can:
+I've attempted to address these flaws in spark.
-* Easily setup profiling operations using commands in the console or in-game, without having to have direct access to the server machine
-* Profiling data is uploaded to a "pastebin"-esque site to be viewed - a temporary web server is not needed, and you can easily share your analysis with others!
+* Profiling is managed entirely using in-game or console commands. You don't need to have direct access to the server machine - just install the plugin as you would normally.
+* Data is uploaded to a "pastebin"-esque site to be viewed - a temporary web server is not needed, and you can easily share your analysis with others!
+* It is not necessary to install any special Java agents or provide a path to the Java Development Kit
-#### How does it work?
+Other benefits of spark compared with other profilers:
+* MCP (Mod Coder Pack) deobfuscation mappings can be applied to method names directly in the viewer
+ * This works for both partially deobfuscated Bukkit mappings, as well as for Sponge/Forge (Searge) mappings
+* No specialist software is required to view the output, just a web browser.
+
+### How does it work?
The spark (WarmRoast) profiler operates using a technique known as [sampling](https://en.wikipedia.org/wiki/Profiling_(computer_programming)#Statistical_profilers). A sampling profiler works by probing the target program's call stack at regular intervals in order to determine how frequently certain actions are being performed. In practice, sampling profilers can often provide a more accurate picture of the target program's execution than other approaches, as they are not as intrusive to the target program, and thus don't have as many side effects.
Sampling profiles are typically less numerically accurate and specific than other profiling methods (e.g. instrumentation), but allow the target program to run at near full speed.
-The resultant data is not exact, but a statistical approximation. The accuracy of the output improves as the sampler runs for longer durations, or as the sampling interval is reduced.
+The resultant data is not exact, but a statistical approximation. The accuracy of the output improves as the sampler runs for longer durations, or as the sampling interval is made more frequent.
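The core idea can be sketched in a few lines of Java. This is a minimal illustration of statistical sampling, not spark's actual implementation: snapshot every thread's stack at a fixed interval and count how often each method appears on top; methods that consume more CPU time are caught there proportionally more often.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TimerTask;

// Minimal sampling sketch (hypothetical names): each tick, record the
// method currently executing at the top of every thread's stack.
class SamplingSketch extends TimerTask {
    final Map<String, Integer> hits = new HashMap<>();

    @Override
    public synchronized void run() {
        for (StackTraceElement[] stack : Thread.getAllStackTraces().values()) {
            if (stack.length == 0) {
                continue;
            }
            StackTraceElement top = stack[0];
            // tally one "hit" for the method observed on top of the stack
            hits.merge(top.getClassName() + "." + top.getMethodName(), 1, Integer::sum);
        }
    }
}
// usage: new java.util.Timer(true).scheduleAtFixedRate(new SamplingSketch(), 0, 10);
```

Running more ticks, or ticking more frequently, yields more samples and therefore a better statistical approximation - which is exactly the trade-off the paragraph above describes.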
-#### spark vs "Minecraft Timings"
+### spark vs "Minecraft Timings"
-Aikar's [timings](https://github.com/aikar/timings) system is similar to spark/WarmRoast, in the sense that it also analyses the CPU activity of the server.
+Aikar's [timings](https://github.com/aikar/timings) system (built into Spigot and Sponge) is similar to spark/WarmRoast, in the sense that it also analyses the CPU activity of the server.
-The biggest drawback of timings is that each area of analysis has to be manually defined.
+timings will generally be slightly more accurate than spark, but is (arguably?!) less useful, as each area of analysis has to be manually defined.
-For example, timings might identify that a certain listener in pluginx is taking up a lot of CPU time processing the PlayerMoveEvent, but it won't tell you which part of the processing is intensive. spark/WarmRoast on the other hand *will* show this information.
+For example, timings might identify that a certain listener in plugin x is taking up a lot of CPU time processing the PlayerMoveEvent, but it won't tell you which part of the processing is slow. spark/WarmRoast on the other hand *will* show this information, right down to the name of the method call causing the bad performance.
-### Installation
+## Installation
To install, add the **spark.jar** file to your server's plugins/mods directory, and then restart your server.
-### Commands
+## Commands
All commands require the `spark.profiler` permission.
@@ -85,4 +93,10 @@ Ends the current profiling operation, uploads the resultant data, and returns a
___
#### `/profiler cancel`
-Cancels the current profiling operation, and discards any recorded data without uploading it.
\ No newline at end of file
+Cancels the current profiling operation, and discards any recorded data without uploading it.
+
+## License
+
+spark is a fork of [WarmRoast](https://github.com/sk89q/WarmRoast), which is [licensed under the GNU General Public License](https://github.com/sk89q/WarmRoast/blob/3fe5e5517b1c529d95cf9f43fd8420c66db0092a/src/main/java/com/sk89q/warmroast/WarmRoast.java#L1-L17).
+
+Therefore, spark is also licensed under the GNU General Public License.
\ No newline at end of file
diff --git a/common/src/main/java/com/sk89q/warmroast/Sampler.java b/common/src/main/java/com/sk89q/warmroast/Sampler.java
index 6c4f60c..a141f82 100644
--- a/common/src/main/java/com/sk89q/warmroast/Sampler.java
+++ b/common/src/main/java/com/sk89q/warmroast/Sampler.java
@@ -25,31 +25,36 @@ import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.ArrayList;
+import java.util.Collections;
import java.util.HashMap;
+import java.util.IdentityHashMap;
import java.util.List;
import java.util.Map;
+import java.util.Set;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutorService;
/**
* Main sampler class.
*/
public class Sampler extends TimerTask {
- /**
- * The thread management interface for the current JVM
- */
+ /** The thread management interface for the current JVM */
private final ThreadMXBean threadBean = ManagementFactory.getThreadMXBean();
- /**
- * A map of root stack nodes for each thread with sampling data
- */
+ /** A map of root stack nodes for each thread with sampling data */
private final Map<String, StackNode> threadData = new HashMap<>();
- /**
- * A future to encapsulation the completion of this sampler instance
- */
+ /** A set of recorded thread info that's yet to be inserted into the node structure */
+ private final Set<QueuedThreadInfo> pendingThreadData = Collections.synchronizedSet(Collections.newSetFromMap(new IdentityHashMap<>()));
+
+ /** The worker pool for inserting stack nodes */
+ private ExecutorService workerPool;
+ /** The lock object */
+ private final Object[] lock = new Object[0];
+    /** A future to encapsulate the completion of this sampler instance */
private final CompletableFuture<Sampler> future = new CompletableFuture<>();
/** The interval to wait between sampling, in milliseconds */
@@ -70,20 +75,47 @@ public class Sampler extends TimerTask {
/**
* Starts the sampler.
*
- * @param timer the timer to schedule the sampling on
+ * @param samplingThread the timer to schedule the sampling on
+ * @param workerPool the worker pool
*/
- public synchronized void start(Timer timer) {
- timer.scheduleAtFixedRate(this, 0, this.interval);
+ public void start(Timer samplingThread, ExecutorService workerPool) {
+ this.workerPool = workerPool;
+ samplingThread.scheduleAtFixedRate(this, 0, this.interval);
this.startTime = System.currentTimeMillis();
}
+    private void insertData(QueuedThreadInfo data) {
+        synchronized (this.lock) {
+            try {
+                StackNode node = this.threadData.computeIfAbsent(data.threadName, StackNode::new);
+                node.log(data.stack, this.interval);
+            } catch (Exception e) {
+                e.printStackTrace();
+            } finally {
+                // remove the pending entry even if the insert failed,
+                // so getData() doesn't wait forever
+                this.pendingThreadData.remove(data);
+            }
+        }
+    }
+
+
/**
* Gets the sampling data recorded by this instance.
*
* @return the data
*/
public Map<String, StackNode> getData() {
- return this.threadData;
+ if (this.pendingThreadData.isEmpty()) {
+ return this.threadData;
+ }
+
+ // wait for all pending data to be inserted
+ while (true) {
+ synchronized (this.lock) {
+ if (this.pendingThreadData.isEmpty()) {
+ return this.threadData;
+ }
+ }
+ }
}
public long getStartTime() {
@@ -101,12 +133,8 @@ public class Sampler extends TimerTask {
return this.future;
}
- private StackNode getRootNode(String threadName) {
- return this.threadData.computeIfAbsent(threadName, StackNode::new);
- }
-
@Override
- public synchronized void run() {
+ public void run() {
try {
if (this.endTime != -1 && this.endTime <= System.currentTimeMillis()) {
this.future.complete(this);
@@ -123,8 +151,9 @@ public class Sampler extends TimerTask {
continue;
}
- StackNode node = getRootNode(threadName);
- node.log(stack, this.interval);
+ QueuedThreadInfo queuedData = new QueuedThreadInfo(threadName, stack);
+ this.pendingThreadData.add(queuedData);
+ this.workerPool.execute(queuedData);
}
} catch (Throwable t) {
this.future.completeExceptionally(t);
@@ -153,4 +182,19 @@ public class Sampler extends TimerTask {
return out;
}
+ private final class QueuedThreadInfo implements Runnable {
+ private final String threadName;
+ private final StackTraceElement[] stack;
+
+ private QueuedThreadInfo(String threadName, StackTraceElement[] stack) {
+ this.threadName = threadName;
+ this.stack = stack;
+ }
+
+ @Override
+ public void run() {
+ insertData(this);
+ }
+ }
+
}
diff --git a/common/src/main/java/me/lucko/spark/common/CommandHandler.java b/common/src/main/java/me/lucko/spark/common/CommandHandler.java
index aa7e47d..c015cb3 100644
--- a/common/src/main/java/me/lucko/spark/common/CommandHandler.java
+++ b/common/src/main/java/me/lucko/spark/common/CommandHandler.java
@@ -1,8 +1,9 @@
package me.lucko.spark.common;
+import com.google.common.util.concurrent.ThreadFactoryBuilder;
import com.google.gson.JsonObject;
-import com.sk89q.warmroast.ThreadDumper;
import com.sk89q.warmroast.Sampler;
+import com.sk89q.warmroast.ThreadDumper;
import me.lucko.spark.common.http.Bytebin;
@@ -14,6 +15,8 @@ import java.util.List;
import java.util.Map;
import java.util.Timer;
import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
@@ -32,7 +35,12 @@ public abstract class CommandHandler<T> {
/**
* The {@link Timer} being used by the {@link #activeSampler}.
*/
- private final Timer timer = new Timer("spark-sampling-thread", true);
+ private final Timer samplingThread = new Timer("spark-sampling-thread", true);
+
+ /**
+ * The worker {@link ExecutorService} being used by the {@link #activeSampler}.
+ */
+ private final ExecutorService workerPool = Executors.newCachedThreadPool(new ThreadFactoryBuilder().setNameFormat("spark-worker-%d").build());
/** Guards {@link #activeSampler} */
private final Object[] activeSamplerMutex = new Object[0];
@@ -97,16 +105,16 @@ public abstract class CommandHandler<T> {
int timeoutSeconds = parseInt(arguments, "timeout", "d");
if (timeoutSeconds != -1 && timeoutSeconds <= 10) {
- sendPrefixedMessage(sender, "&cThe specified timeout is not long enough for accurate results to be formed.");
+ sendPrefixedMessage(sender, "&cThe specified timeout is not long enough for accurate results to be formed. Please choose a value greater than 10.");
return;
}
- if (timeoutSeconds != -1 && timeoutSeconds < 100) {
- sendPrefixedMessage(sender, "&7The accuracy of the output will significantly improve when sampling is able to run for longer periods. Consider setting a value of timeout over 1-2 minutes.");
+ if (timeoutSeconds != -1 && timeoutSeconds < 30) {
+ sendPrefixedMessage(sender, "&7The accuracy of the output will significantly improve when sampling is able to run for longer periods. Consider setting a timeout value over 30 seconds.");
}
int intervalMillis = parseInt(arguments, "interval", "i");
- if (intervalMillis == -1) {
+ if (intervalMillis <= 0) {
intervalMillis = 10;
}
@@ -128,7 +136,7 @@ public abstract class CommandHandler<T> {
return;
}
- sendPrefixedMessage(sender, "&7Starting a new sampler task...");
+ sendPrefixedMessage(sender, "&7Initializing a new profiler, please wait...");
SamplerBuilder builder = new SamplerBuilder();
builder.threadDumper(threadDumper);
@@ -136,15 +144,20 @@ public abstract class CommandHandler<T> {
builder.completeAfter(timeoutSeconds, TimeUnit.SECONDS);
}
builder.samplingInterval(intervalMillis);
- sampler = this.activeSampler = builder.start(timer);
+ sampler = this.activeSampler = builder.start(this.samplingThread, this.workerPool);
- sendPrefixedMessage(sender, "&bSampling has begun!");
+ sendPrefixedMessage(sender, "&bProfiler now active!");
+ if (timeoutSeconds == -1) {
+ sendPrefixedMessage(sender, "&7Use '/profiler stop' to stop profiling and upload the results.");
+ } else {
+ sendPrefixedMessage(sender, "&7The results will be automatically returned after the profiler has been running for " + timeoutSeconds + " seconds.");
+ }
}
CompletableFuture<Sampler> future = sampler.getFuture();
// send message if profiling fails
- future.whenComplete((s, throwable) -> {
+ future.whenCompleteAsync((s, throwable) -> {
if (throwable != null) {
sendPrefixedMessage(sender, "&cSampling operation failed unexpectedly. Error: " + throwable.toString());
throwable.printStackTrace();
@@ -152,7 +165,7 @@ public abstract class CommandHandler<T> {
});
// set activeSampler to null when complete.
- future.whenComplete((s, throwable) -> {
+ future.whenCompleteAsync((s, throwable) -> {
synchronized (this.activeSamplerMutex) {
if (sampler == this.activeSampler) {
this.activeSampler = null;
@@ -218,7 +231,7 @@ public abstract class CommandHandler<T> {
JsonObject output = sampler.formOutput();
try {
String pasteId = Bytebin.postContent(output);
- sendPrefixedMessage(sender, "&bSampling results can be viewed here: &7" + VIEWER_URL + pasteId);
+ sendPrefixedMessage(sender, "&bSampling results: &7" + VIEWER_URL + pasteId);
} catch (IOException e) {
sendPrefixedMessage(sender, "&cAn error occurred whilst uploading the results.");
e.printStackTrace();
diff --git a/common/src/main/java/me/lucko/spark/common/SamplerBuilder.java b/common/src/main/java/me/lucko/spark/common/SamplerBuilder.java
index e6c0cf8..a15bcf6 100644
--- a/common/src/main/java/me/lucko/spark/common/SamplerBuilder.java
+++ b/common/src/main/java/me/lucko/spark/common/SamplerBuilder.java
@@ -1,10 +1,11 @@
package me.lucko.spark.common;
import com.google.common.base.Preconditions;
-import com.sk89q.warmroast.ThreadDumper;
import com.sk89q.warmroast.Sampler;
+import com.sk89q.warmroast.ThreadDumper;
import java.util.Timer;
+import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;
/**
@@ -35,9 +36,9 @@ public class SamplerBuilder {
return this;
}
- public Sampler start(Timer timer) {
- Sampler sampler = new Sampler(samplingInterval, threadDumper, timeout);
- sampler.start(timer);
+ public Sampler start(Timer samplingThread, ExecutorService workerPool) {
+ Sampler sampler = new Sampler(this.samplingInterval, this.threadDumper, this.timeout);
+ sampler.start(samplingThread, workerPool);
return sampler;
}