As of webpack v4, the CommonsChunkPlugin is deprecated and has been removed. From the release notes: CommonsChunkPlugin was replaced with a set of defaults and an easily overridable API called optimization.splitChunks.
webpack.optimize.CommonsChunkPlugin has been removed, please use config.optimization.splitChunks instead.
Deprecated
You no longer need these plugins:
- DedupePlugin has also been removed in v4
- NoEmitOnErrorsPlugin -> optimization.noEmitOnErrors (on by default in production mode)
- ModuleConcatenationPlugin -> optimization.concatenateModules (on by default in production mode)
- NamedModulesPlugin -> optimization.namedModules (on by default in development mode)
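If you want to set these replacements explicitly instead of relying on the mode defaults, a minimal sketch could look like this (all three flags are already the defaults in their respective modes):

// webpack.config.js
module.exports = {
    mode: 'production',
    optimization: {
        noEmitOnErrors: true,      // was NoEmitOnErrorsPlugin
        concatenateModules: true,  // was ModuleConcatenationPlugin
        namedModules: false        // set to true (default in development) for readable module names
    }
};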
Recommendations for webpack 4
Use mini-css-extract-plugin instead of extract-text-webpack-plugin.
Use webpack-bundle-analyzer to analyze your bundled output in a graphical way.
Entry scripts are the real "entry scripts" of your application; don't add vendor files explicitly to entry in webpack.config.js.
SPAs have one entry point; Multi-Page-Apps like classic ASP.NET MVC apps have multiple entry points. Webpack builds a dependency graph out of your entry scripts and generates optimized bundles for your app.
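A rough sketch of these recommendations combined (the entry names and paths are made up for illustration; both plugins are optional):

// webpack.config.js
const MiniCssExtractPlugin = require('mini-css-extract-plugin');
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
    // Each entry is a real entry script of a page - no explicit "vendor" entry
    entry: {
        home: './src/pages/home.js',
        admin: './src/pages/admin.js'
    },
    module: {
        rules: [
            { test: /\.css$/, use: [MiniCssExtractPlugin.loader, 'css-loader'] }
        ]
    },
    plugins: [
        new MiniCssExtractPlugin(),  // replaces extract-text-webpack-plugin
        new BundleAnalyzerPlugin()   // opens a graphical report of the bundled output
    ]
};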
If you want to migrate from an older webpack version, it's best to check out the migration guide.
Tree shaking (dead code elimination) is only enabled in production mode.
Webpack 4, the new way of bundling assets
(You have to remove the CommonsChunkPlugin thinking from your head.)
!!! Meanwhile the webpack documentation has been updated and a SplitChunks section has been added !!!
It follows a new philosophy:
Webpack 4 now performs optimizations automatically by default. It analyzes your dependency graph and creates optimal bundles (output) based on the following conditions:
- The new chunk can be shared OR its modules come from the node_modules folder
- The new chunk would be bigger than 30kb (before min+gz)
- Maximum number of parallel requests when loading chunks on demand <= 5
- Maximum number of parallel requests at initial page load <= 3
All of this can be tweaked using the SplitChunksPlugin! (see the SplitChunksPlugin documentation)
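These thresholds can be overridden directly on optimization.splitChunks; the values below are only illustrative:

// webpack.config.js
module.exports = {
    optimization: {
        splitChunks: {
            chunks: 'all',       // also consider initial (non-async) chunks
            minSize: 20000,      // lower the ~30kb size threshold
            maxAsyncRequests: 5,
            maxInitialRequests: 3
        }
    }
};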
A more detailed explanation of how to use the new optimization.splitChunks API:
CommonsChunkPlugin was removed because it had a lot of problems:
- It can result in more code being downloaded than needed.
- It’s inefficient on async chunks.
- It’s difficult to use.
- The implementation is difficult to understand.
The SplitChunksPlugin also has some great properties:
- It never downloads unneeded modules (as long as you don't enforce chunk merging via name)
- It works efficiently on async chunks too
- It’s on by default for async chunks
- It handles vendor splitting with multiple vendor chunks
- It’s easier to use
- It doesn’t rely on chunk graph hacks
- Mostly automatic
--> Source
Regarding your issue, you want to split all deps of entry1 and entry2 into separate bundles.
optimization: {
    splitChunks: {
        cacheGroups: {
            "entry1-bundle": {
                test: /.../, // <-- use the test property to specify which deps go here
                chunks: "all",
                name: "entry1-bundle",
                /** Ignore minimum size, minimum chunks and maximum requests and always create chunks for this cache group */
                enforce: true,
                priority: .. // use priority to tell where a shared dep should go
            },
            "entry2-bundle": {
                test: /..../, // <-- use the test property to specify which deps go here
                chunks: "all",
                name: "entry2-bundle",
                enforce: true,
                priority: ..
            }
        }
    }
},
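A filled-in variant might look like this; the test regexes and priority values are hypothetical and only illustrate how to route specific dependencies into each bundle:

optimization: {
    splitChunks: {
        cacheGroups: {
            "entry1-bundle": {
                test: /[\\/]node_modules[\\/](moment|lodash)[\\/]/, // hypothetical deps of entry1
                chunks: "all",
                name: "entry1-bundle",
                enforce: true,
                priority: 20 // the higher priority wins when a module matches several cache groups
            },
            "entry2-bundle": {
                test: /[\\/]node_modules[\\/](jquery)[\\/]/, // hypothetical deps of entry2
                chunks: "all",
                name: "entry2-bundle",
                enforce: true,
                priority: 10
            }
        }
    }
},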
If you don't add an optimization.splitChunks entry, the default configuration is as follows:
splitChunks: {
    chunks: 'async',
    minSize: 30000,
    minRemainingSize: 0,
    maxSize: 0,
    minChunks: 1,
    maxAsyncRequests: 6,
    maxInitialRequests: 4,
    automaticNameDelimiter: '~',
    automaticNameMaxLength: 30,
    cacheGroups: {
        vendors: {
            test: /[\\/]node_modules[\\/]/,
            priority: -10
        },
        default: {
            minChunks: 2,
            priority: -20,
            reuseExistingChunk: true
        }
    }
}
You can set optimization.splitChunks.cacheGroups.default to false to disable the default cache group; the same goes for the vendors cache group. For example:
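splitChunks: {
    cacheGroups: {
        default: false, // disable the built-in "default" cache group
        vendors: false  // disable the built-in "vendors" cache group
    }
}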
Here are some other SplitChunks configuration examples with explanation.
Up-to-date interface implementations for SplitChunksOptions, CacheGroupsOptions and Optimization can be found here.
The interface definitions below may not be 100% accurate, but they are good for a simple overview:
SplitChunksOptions interface:
interface SplitChunksOptions {
/** Select chunks for determining shared modules (defaults to "async", "initial" and "all" requires adding these chunks to the HTML) */
chunks?: "initial" | "async" | "all" | ((chunk: compilation.Chunk) => boolean);
/** Minimal size for the created chunk */
minSize?: number;
/** Minimum number of times a module has to be duplicated until it's considered for splitting */
minChunks?: number;
/** Maximum number of requests which are accepted for on-demand loading */
maxAsyncRequests?: number;
/** Maximum number of initial chunks which are accepted for an entry point */
maxInitialRequests?: number;
/** Give chunks created a name (chunks with equal name are merged) */
name?: boolean | string | ((...args: any[]) => any);
/** Assign modules to a cache group (modules from different cache groups are tried to keep in separate chunks) */
cacheGroups?: false | string | ((...args: any[]) => any) | RegExp | { [key: string]: CacheGroupsOptions };
}
CacheGroupsOptions interface:
interface CacheGroupsOptions {
/** Assign modules to a cache group */
test?: ((...args: any[]) => boolean) | string | RegExp;
/** Select chunks for determining cache group content (defaults to "initial", "initial" and "all" requires adding these chunks to the HTML) */
chunks?: "initial" | "async" | "all" | ((chunk: compilation.Chunk) => boolean);
/** Ignore minimum size, minimum chunks and maximum requests and always create chunks for this cache group */
enforce?: boolean;
/** Priority of this cache group */
priority?: number;
/** Minimal size for the created chunk */
minSize?: number;
/** Minimum number of times a module has to be duplicated until it's considered for splitting */
minChunks?: number;
/** Maximum number of requests which are accepted for on-demand loading */
maxAsyncRequests?: number;
/** Maximum number of initial chunks which are accepted for an entry point */
maxInitialRequests?: number;
/** Try to reuse existing chunk (with name) when it has matching modules */
reuseExistingChunk?: boolean;
/** Give chunks created a name (chunks with equal name are merged) */
name?: boolean | string | ((...args: any[]) => any);
}
Optimization interface:
interface Optimization {
/**
* Modules are removed from chunks when they are already available in all parent chunk groups.
* This reduces asset size. Smaller assets also result in faster builds since less code generation has to be performed.
*/
removeAvailableModules?: boolean;
/** Empty chunks are removed. This reduces load in filesystem and results in faster builds. */
removeEmptyChunks?: boolean;
/** Equal chunks are merged. This results in less code generation and faster builds. */
mergeDuplicateChunks?: boolean;
/** Chunks which are subsets of other chunks are determined and flagged in a way that subsets don’t have to be loaded when the bigger chunk has been loaded. */
flagIncludedChunks?: boolean;
/** Give more often used ids smaller (shorter) values. */
occurrenceOrder?: boolean;
/** Determine exports for each module when possible. This information is used by other optimizations or code generation. I. e. to generate more efficient code for export * from. */
providedExports?: boolean;
/**
* Determine used exports for each module. This depends on optimization.providedExports. This information is used by other optimizations or code generation.
* I. e. exports are not generated for unused exports, export names are mangled to single char identifiers when all usages are compatible.
* DCE in minimizers will benefit from this and can remove unused exports.
*/
usedExports?: boolean;
/**
* Recognise the sideEffects flag in package.json or rules to eliminate modules. This depends on optimization.providedExports and optimization.usedExports.
* These dependencies have a cost, but eliminating modules has positive impact on performance because of less code generation. It depends on your codebase.
* Try it for possible performance wins.
*/
sideEffects?: boolean;
/** Tries to find segments of the module graph which can be safely concatenated into a single module. Depends on optimization.providedExports and optimization.usedExports. */
concatenateModules?: boolean;
/** Finds modules which are shared between chunk and splits them into separate chunks to reduce duplication or separate vendor modules from application modules. */
splitChunks?: SplitChunksOptions | false;
/** Create a separate chunk for the webpack runtime code and chunk hash maps. This chunk should be inlined into the HTML */
runtimeChunk?: boolean | "single" | "multiple" | RuntimeChunkOptions;
/** Avoid emitting assets when errors occur. */
noEmitOnErrors?: boolean;
/** Instead of numeric ids, give modules readable names for better debugging. */
namedModules?: boolean;
/** Instead of numeric ids, give chunks readable names for better debugging. */
namedChunks?: boolean;
/** Defines the process.env.NODE_ENV constant to a compile-time-constant value. This allows to remove development only code from code. */
nodeEnv?: string | false;
/** Use the minimizer (optimization.minimizer, by default uglify-js) to minimize output assets. */
minimize?: boolean;
/** Minimizer(s) to use for minimizing the output */
minimizer?: Array<Plugin | Tapable.Plugin>;
/** Generate records with relative paths to be able to move the context folder. */
portableRecords?: boolean;
}
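As a closing sketch, here is how a few of these Optimization options might be combined in a production config (the values are illustrative, not recommendations):

// webpack.config.js
module.exports = {
    mode: 'production',
    optimization: {
        runtimeChunk: 'single',         // separate chunk for the webpack runtime
        splitChunks: { chunks: 'all' }, // split shared/vendor code for all chunk types
        noEmitOnErrors: true,
        minimize: true
    }
};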