MySQL Error #1071 - Specified key was too long; max key length is 767 bytes
When I executed the following command:

ALTER TABLE `mytable` ADD UNIQUE (
`column1` ,
`column2`
);

I got this error message:

#1071 - Specified key was too long; max key length is 767 bytes

Information about column1 and column2:

column1 varchar(20) utf8_general_ci
column2  varchar(500) utf8_general_ci

I think varchar(20) only requires 21 bytes while varchar(500) only requires 501 bytes. So the total bytes are 522, less than 767. So why did I get the error message?

Raglan answered 29/11, 2009 at 3:18 Comment(5)
Because it's not 520 bytes, but rather 2080 bytes, which far exceeds 767 bytes. You could do column1 varchar(20) and column2 varchar(170). If you want a character/byte equivalence, use latin1Spinule
I think your calculation is a bit wrong here. MySQL uses 1 or 2 extra bytes to record a value's length: 1 byte if the column's max length is 255 bytes or less, 2 if it's longer than 255 bytes. The utf8_general_ci encoding needs 3 bytes per character, so varchar(20) uses 61 bytes and varchar(500) uses 1502 bytes, in total 1563 bytesPaunchy
mysql> select maxlen, character_set_name from information_schema.character_sets where character_set_name in ('latin1', 'utf8', 'utf8mb4');
maxlen | character_set_name
------ | ------------------
1      | latin1
3      | utf8
4      | utf8mb4Paunchy
'if you want a character/byte equiv, use latin1' Please don't do this. Latin1 really, really sucks. You will regret this.Contagion
Refer https://mcmap.net/q/67936/-how-to-fix-mysql-index-column-size-too-large-laravel-migrate for solutionWashery

767 bytes is the stated index prefix limitation for InnoDB tables in MySQL version 5.6 (and prior versions). It's 1,000 bytes for MyISAM tables. In MySQL version 5.7 (and upwards) this limit has been increased to 3072 bytes.

You also have to be aware that if you set an index on a big CHAR or VARCHAR field which is utf8mb4 encoded, you have to divide the max index prefix length of 767 bytes (or 3072 bytes) by 4, resulting in 191. This is because the maximum length of a utf8mb4 character is four bytes. For a utf8 character it is three bytes, resulting in a max index prefix length of 255 characters.

One option you have is to just place a lower limit on your VARCHAR fields.

Another option (according to the response to this issue) is to index a prefix of the column rather than the entire value, i.e.:

ALTER TABLE `mytable` ADD UNIQUE ( column1(15), column2(200) );

Tweak as needed to get the key to apply, but I wonder whether it would be worth reviewing your data model for this entity to see if there are improvements that would allow you to implement the intended business rules without hitting the MySQL limitation.

Norwegian answered 29/11, 2009 at 3:52 Comment(14)
To apply by specifying a subset of the column rather than the entire amount. A good solution.Raglan
@OMGPonies: Do you happen to know, if DB2/MSSQL/Oracle have the same limitation on index size? For example HSQL does not have such limitation...Preprandial
@dma_k: No experience with DB2, but I haven't experienced the issue for SQL Server or Oracle.Norwegian
This doesn't explain why fields well below the length limit are exceeding the length limit...Goodsized
I've tried editing in the information @Goodsized is missing above, which clearly considered missing by others as well, but it gets rejected as being more suitable as a comment. For those trying to understand why 500 + 20 > 767 see Stefan Endrullis' comment on Julien's answer.Beverie
This reply is great. Still, the information in the response by @anthony-rutledge should be included here (or mentioned, or linked). The encoding-to-char-size issue is very important to dimension the column sizes.Archaic
This can be a problem. For example: I have a field name(255) and add a unique index on name(191) since I'm using utf8mb4. If one user enters the name 'IJUE3ump5fiUuCi16jSofYS234MLschW4wsIktKiBrTPOTKBK6Vteh5pNuz1tKjy...aO500mlJs' and another user enters 'IJUE3ump5fiUuCi16jSofYS234MLschW4wsIktKiBrTPOTKBK6Vteh5pNuz1tKjy...aO500mlJa', the difference is only in the last character. It should pass validation, not get stuck at a duplicate entry.Tracee
The index limit is 767 bytes, not characters. And since Mysql's utf8mb4 character set (which the rest of the world calls utf8) needs (at most) 4 bytes per character you can only index up to VARCHAR(191). Mysql's utf8 character set (which the rest of the world calls broken) needs at most 3 bytes per character so if you are using that (which you shouldn't), you can index up to VARCHAR(255)Contagion
You may want to update your answer to state that InnoDB allows for 3072b key length if using Barracuda format with innodb_file_per_table and innodb_large_prefixGapes
In some cases, where you are sure about storing short data, you can reduce the VARCHAR's size. eg. 100 (max 191). I would prefer this instead of Indexing on first 100 characters of the source data from of length 255.Leoine
Are you sure the limit of 3072 only applies from 5.7 onwards? 5.6 has that tooIncandesce
You can find more information in the following gist. Including a way to reproduce it.Makings
I think placing lower limits on VARCHAR fields is more appropriate (although it requires more work).Cowhide
Making varchar value from 255 to 191 solved the issue in my case.Glyptodont

When you hit the limit, set the following:

  • INNODB utf8 VARCHAR(255)
  • INNODB utf8mb4 VARCHAR(191)
Mideast answered 17/7, 2015 at 11:12 Comment(7)
because 767 / 4 ≈ 191, and 767 / 3 ≈ 255Charissa
where and how to set it?Fecund
Yes, specifying ENGINE=InnoDB DEFAULT CHARSET=utf8 at the end of the CREATE TABLE statement I was able to have a VARCHAR(255) primary key. Thanks.Millan
Indeed! useful for some languages like Vietnamese which we must use utf8mb4 for the accuracy in search!Hobie
Or you can use mysql version 5.7 to keep utf8mb4 VARCHAR(255)Hobie
Following this answer, I searched for all the VARCHAR(255) in my SQL dump file and replaced them with VARCHAR(191). This worked. I have no idea about possible side effects; I used it to import data from a server to localhost.Flap
the utf8mb4 limit 191 is what most people neededMatlick

If anyone is having issues with InnoDB and utf8 charset trying to put a UNIQUE index on a VARCHAR(256) field, switch it to VARCHAR(255). It seems 255 is the limitation.

Phrixus answered 29/5, 2013 at 17:19 Comment(6)
The number of allowed characters just depends on your character set. UTF8 may use up to 3 bytes per character, utf8mb4 up to 4 bytes, and latin1 only 1 byte. Thus for utf8 your key length is limited to 255 characters, since 3*255 = 765 < 767.Thorvaldsen
As Stefan Endrullis stated, it depends on the charset. If you use UTF8, which uses 3 bytes: 255x3=765, which is lower than the 767 limit, while 256x3=768 is higher. But if you use UTF8mb4, it's 255*4=1020, so it's not really a solutionMyeloid
for utf8mb4 i just set all my string indexes to length 190, helpedLevity
This answer is correct. However if 255 is indeed working for you, it means you are using Mysql utf8, which unfortunately is broken. It can only encode characters in the basic multilingual plane. You will get issues with characters falling outside of that. For example those Emoji characters they have been adding fall outside of it I think. So instead of switching to VARCHAR(255), switch to VARCHAR(191) and switch the encoding to utf8mb4 (which is actually just utf8, but MySql wanted to keep back. compat).Contagion
Not helpful for my multicolumn unique constraint. It also would not work for the OP's multicolumn unique constraint. (would give it a total size of 825 bytes)Bulldog
This answer does not hit at the root of the matter.Lucubration

MySQL assumes worst case for the number of bytes per character in the string. For the MySQL 'utf8' encoding, that's 3 bytes per character since that encoding doesn't allow characters beyond U+FFFF. For the MySQL 'utf8mb4' encoding, it's 4 bytes per character, since that's what MySQL calls actual UTF-8.

So assuming you're using 'utf8', your first column will take 60 bytes of the index, and your second another 1500.

Rainy answered 19/3, 2014 at 19:15 Comment(6)
— Which presumably means that when using utf8mb4, I need to set them to (at most) 191 as 191*4 = 764 < 767.Bombard
@Bombard Yes, exactly,Rainy
I think this might be the right answer, but could you elaborate on what one would need to do to correct such an issue? At least for MySQL noobs like me?Harmonica
There's no one way to get around this index limit. The question here is about unique constraints; for those, you can have one column of unlimited length text, and another where you store a hash (like MD5) of that text, and use the hash column for your unique constraint. You'll have to ensure your program keeps the hashes up-to-date when it alters the text, but there's various ways to handle that without too much trouble. Quite frankly, MySQL should implement such a thing itself so you don't have to; I wouldn't be surprised if MariaDB has something like that built-in.Rainy
A unique index on a very long varchar column is rare. Think about why you need it because it might be a design issue. If you just want an index on it for search, consider some 'keywords' field which fits within 191 characters, or split text into short description and long/complete text etc. Or if you really need full text search, consider using specialized software for it, such as Apache Lucene.Contagion
I'm not sure I'd call 191 characters "very long". I'm pretty sure I've got a few files with absolute paths longer than that. Also, note that this applies to any index of the column, not just one for a uniqueness constraint.Rainy

Solution For Laravel Framework

As per Laravel 5.4.* documentation; You have to set the default string length inside the boot method of the app/Providers/AppServiceProvider.php file as follows:

use Illuminate\Support\Facades\Schema;

public function boot() 
{
    Schema::defaultStringLength(191); 
}

Explanation of this fix, given by Laravel 5.4.* documentation:

Laravel uses the utf8mb4 character set by default, which includes support for storing "emojis" in the database. If you are running a version of MySQL older than the 5.7.7 release or MariaDB older than the 10.2.2 release, you may need to manually configure the default string length generated by migrations in order for MySQL to create indexes for them. You may configure this by calling the Schema::defaultStringLength method within your AppServiceProvider.

Alternatively, you may enable the innodb_large_prefix option for your database. Refer to your database's documentation for instructions on how to properly enable this option.

Intestine answered 23/7, 2017 at 20:35 Comment(3)
I've done some research and found different views. This problem arises because of the use of indexing "unique" in the email field in the users table. So simply adding a limit value of 250 can solve the problem. This value is added to the file migration which creates the users table on the line with the code "unique ()" so it looks like this: $table->string('email', 250)->unique ();Rex
It's Laravel 10 and this problem still persists.Industry
I just installed Laravel 10 but faced the same issue, and only this solution worked. Why hasn't the Laravel team fixed this from version 5.4 to 10 instead of waiting for developers to fix it?Goosy

run this query before your query:

SET @@global.innodb_large_prefix = 1;

this will increase limit to 3072 bytes.
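As the comments below note, the large prefix only takes effect for InnoDB tables using the DYNAMIC or COMPRESSED row formats, so a fuller sketch (the table name is illustrative) looks like this:

```sql
-- allow index key prefixes up to 3072 bytes
SET @@global.innodb_large_prefix = 1;

-- pre-5.7 servers also need the Barracuda file format and file-per-table tablespaces
SET @@global.innodb_file_format = Barracuda;
SET @@global.innodb_file_per_table = 1;

-- reconnect, then rebuild the table with a row format that supports large prefixes
ALTER TABLE mytable ROW_FORMAT=DYNAMIC;
```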

Callista answered 4/4, 2014 at 21:5 Comment(7)
Is there any downside to changing to innodb_large_prefix globally? And is that DB "global" or all DBs TOTALLY GLOBAL?Ricciardi
Only applies when using non-standard row formats. See dev.mysql.com/doc/refman/5.5/en/…. Specifically, 'Enable this option to allow index key prefixes longer than 767 bytes (up to 3072 bytes), for InnoDB tables that use the DYNAMIC and COMPRESSED row formats.' The default row format is unaffected.Downwash
This worked well for me - more details and a guide can be found here: mechanics.flite.com/blog/2014/07/29/…Sufferable
One wonders about the hit on indexing and lookup if a key field is too large.Lucubration
do we need to restart mysql server after this ?Tko
Important missing details from this answer. innodb_file_format must be BARRACUDA and At the table level you have to use ROW_FORMAT=DYNAMIC or ROW_FORMAT=COMPRESSED. See @cwd's comment above with the guide.Shatzer
This can (and probably should) also be set in the server's configuration files (e.g. /etc/mysql/my.cnf or /etc/mysql/conf.d/innodb.cnf, etc.). If you do not make this change permanent, you will be bitten by it again if you try to CREATE or ALTER a table to create a new field of this kind.Bayly

What character encoding are you using? Some character sets (like UTF-16, et cetera) use more than one byte per character.

Indocile answered 29/11, 2009 at 3:21 Comment(6)
If it's UTF8, a character can use up to 4 bytes, so that 20 character column is 20 * 4 + 1 bytes, and the 500 char column is 500 * 4 + 2 bytesNemathelminth
For what it's worth, i just had the same problem and switching from utf8_general_ci to utf8_unicode_ci solved the problem for me. I do not know why though :(Feuilleton
For a VARCHAR(256) column with a UNIQUE index, changing collation had no effect for me, like it did for @Andresch. However, reducing the length from 256 to 255 did solve it. I don't understand why, as 767 / max 4 bytes per character would yield a maximum of 191?Caravette
255*3 = 765; 256*3 = 768. It appears your server was assuming 3 bytes per character, @CaravetteIndocile
@Thanatos: utf8 uses up to 3. SELECT MAXLEN FROM information_schema.CHARACTER_SETS WHERE CHARACTER_SET_NAME = 'utf8';Tortilla
@Greg: You are correct, but this should be elaborated: UTF-8 itself uses 1–4 bytes per code point. MySQL's "character sets" (really encodings) has a character set called "utf8" that is able to encode some of UTF-8, and uses 1–3 bytes per code point, and is incapable of encoding code points outside the BMP. It also includes another character set called "utf8mb4", which uses 1–4 bytes per code point, and is capable of encoding all Unicode code points. (utf8mb4 is UTF-8, utf8 is a weird version of UTF-8.)Nemathelminth

Replace utf8mb4 with utf8 in your import file.


But note that utf8 charset is deprecated and it does not support all Unicode characters, e.g. emojis, so you will lose full Unicode support if you do this.

Pavel answered 17/7, 2019 at 23:56 Comment(3)
Why exactly should one do that? Additionally, if this has any downsides (which I assume), you should mention thisCruet
This will mean your column won't be able store some Unicode characters; most notable emojis.Gwalior
I believe you do not want to make this change. But if you do actually want this, in MySQL 5.6+ you should be using utf8mb3, not utf8. The latter is just an alias to the former, and the alias is planned to remap in the future. When using old/deprecated standards, denote that explicitly so you don't get a surprise "upgrade". Encouraged by 5.6, 5.7, and 8.0 docs.Nabal

I think varchar(20) only requires 21 bytes while varchar(500) only requires 501 bytes. So the total bytes are 522, less than 767. So why did I get the error message?

UTF8 requires 3 bytes per character to store the string, so in your case 20 + 500 characters = 20*3 + 500*3 = 1560 bytes, which is more than the allowed 767 bytes.

The limit for UTF8 is 767/3 = 255 characters, for UTF8mb4 which uses 4 bytes per character it is 767/4 = 191 characters.


There are two solutions to this problem if you need to use longer column than the limit:

  1. Use a "cheaper" encoding (one that requires fewer bytes per character)
    In my case, I needed to add a unique index on a column containing the SEO string of an article. As I use only [A-z0-9\-] characters for SEO, I used latin1_general_ci, which uses only one byte per character, so the column can have a length of up to 767 bytes.
  2. Create a hash of your column and use a unique index only on that
    The other option for me was to create another column storing a hash of the SEO string; this column would have a UNIQUE key to ensure SEO values are unique. I would also add a KEY index to the original SEO column to speed up lookups.
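Both options might look like this in SQL (a sketch with illustrative table/column names; in option 2 the hash column is assumed to be maintained by the application):

```sql
-- Option 1: use a one-byte-per-character encoding for an ASCII-only column
ALTER TABLE article
  MODIFY seo VARCHAR(500) CHARACTER SET latin1 COLLATE latin1_general_ci,
  ADD UNIQUE (seo);          -- 500 bytes, under the 767-byte limit

-- Option 2: enforce uniqueness through a fixed-width hash of the column
ALTER TABLE article
  ADD COLUMN seo_hash CHAR(32) NOT NULL,  -- e.g. MD5(seo), kept in sync by the application
  ADD UNIQUE (seo_hash),
  ADD KEY (seo(191));                     -- non-unique prefix index to speed up lookups
```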
Vasodilator answered 23/12, 2016 at 9:3 Comment(1)
This did the trick. I've had varchar(256) and I had to change it to varchar(250).Schaffner

Why you get the error message has already been answered by many users here. My answer is about how to fix it and keep your setup as it is.

Refer to this link.

  1. Open the MySQL client (or MariaDB client). It is a command-line tool.
  2. It will ask for your password; enter your correct password.
  3. Select your database by using the command use my_database_name;

Database changed

  4. set global innodb_large_prefix=on;

Query OK, 0 rows affected (0.00 sec)

  5. set global innodb_file_format=Barracuda;

Query OK, 0 rows affected (0.02 sec)

  6. Go to your database in phpMyAdmin or something like that for easy management. > Select database > View table structure > Go to the Operations tab. > Change ROW_FORMAT to DYNAMIC and save changes.
  7. Go to the table's Structure tab > Click the Unique button.
  8. Done. Now it should have no errors.

The problem with this fix is that if you export the db to another server (for example from localhost to the real host) and you cannot use the MySQL command line on that server, you cannot make it work there.

Tracee answered 30/7, 2016 at 16:7 Comment(2)
If you still get the error after entering the above query, try going to phpMyAdmin > set the collation to your preference (for me, utf8_general_ci) > click apply (even if it's already utf8)Swarthy
I'm not using a tool that works with your instructions, but I'm still voting it up for trying to help people with actually fixing the problem. There are endless explanations of what causes the problem, but remarkably little on how to actually solve it.Iyre
Specified key was too long; max key length is 767 bytes

You got that message because 1 byte equals 1 character only if you use the latin1 character set. If you use utf8, each character is counted as 3 bytes when defining your key column. If you use utf8mb4, each character is counted as 4 bytes. Thus, you need to multiply your key field's character limit by 1, 3, or 4 (in my example) to determine the number of bytes the key field is trying to allow. If you are using utf8mb4, you can only define 191 characters for a native InnoDB primary key field. Just don't breach 767 bytes.
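For example, under utf8mb4 a 191-character key fits exactly within the limit (191 × 4 = 764 ≤ 767), while 192 characters would not; a minimal sketch with illustrative names:

```sql
CREATE TABLE t (
  name VARCHAR(191),
  UNIQUE KEY (name)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;  -- 191 * 4 = 764 bytes: accepted
```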

Lucubration answered 13/10, 2015 at 12:49 Comment(0)

5 workarounds:

The limit was raised in 5.7.7 (MariaDB 10.2.2?). And it can be increased with some work in 5.6 (10.1).

If you are hitting the limit because you are trying to use CHARACTER SET utf8mb4, then do one of the following (each has a drawback) to avoid the error:

⚈  Upgrade to 5.7.7 for 3072 byte limit -- your cloud may not provide this;
⚈  Change 255 to 191 on the VARCHAR -- you lose any values longer than 191 characters (unlikely?);
⚈  ALTER .. CONVERT TO utf8 -- you lose Emoji and some of Chinese;
⚈  Use a "prefix" index -- you lose some of the performance benefits.
⚈  Or... Stay with older version but perform 4 steps to raise the limit to 3072 bytes:

SET GLOBAL innodb_file_format=Barracuda;
SET GLOBAL innodb_file_per_table=1;
SET GLOBAL innodb_large_prefix=1;
-- log out & log in (to pick up the global values)
ALTER TABLE tbl ROW_FORMAT=DYNAMIC;  -- (or COMPRESSED)

-- http://mysql.rjweb.org/doc.php/limits#767_limit_in_innodb_indexes

Jabber answered 10/10, 2018 at 22:23 Comment(0)

You could add a column containing the MD5 of the long columns.
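On MySQL 5.7+ this can even be done with a stored generated column, so the server keeps the hash in sync itself (a sketch; table and column names are illustrative):

```sql
ALTER TABLE mytable
  ADD COLUMN long_col_md5 CHAR(32)
    AS (MD5(long_col)) STORED,
  ADD UNIQUE KEY (long_col_md5);
```

On older versions, maintain the hash column from the application (or with triggers), as the comments discuss.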

Dancy answered 25/2, 2011 at 4:18 Comment(2)
Note that this will not allow you to do range scans over these columns. Prefix lengths on VARCHARs will allow you to keep this trait, while causing possibly spurious matches in the index (and scanning and row lookup to eliminate them) (see the accepted answer). (This is essentially a manually implemented hash index, which sadly MysQL doesn't support with InnoDB tables.)Nemathelminth
I couldn't use prefix indices because I needed to maintain compatibility with H2 for testing purposes, and found using a hash column works well. I would strongly recommend using a collision resistant function such as SHA1 instead of MD5 to prevent malicious users from creating collisions. An index collision could be exploited to leak data if one of your queries only checks the hash value, and not the full column value.Instrumentation

For Laravel 5.7 to 9.0

Steps to follow:

  1. Go to App\Providers\AppServiceProvider.php.
  2. Add use Illuminate\Support\Facades\Schema; at the top.
  3. Inside the boot function, add Schema::defaultStringLength(191);

That's all. Enjoy.

Brent answered 26/6, 2018 at 9:9 Comment(1)
This is a bad solution. Reason: indexes are not meant to be infinitely-long. When you apply a unique index to something, you want the index to have fixed-with most of the time. That means you SHOULDN'T make stuff like email unique, but you should hash the email and make that one unique. Unlike raw string-data, hashes are fixed width and can be indexed and made unique without issues. Instead of understanding the problem, you're spreading awful practices that don't scale.Jackhammer

We encountered this issue when trying to add a UNIQUE index to a VARCHAR(255) field using utf8mb4. While the problem is outlined well here already, I wanted to add some practical advice for how we figured this out and solved it.

When using utf8mb4, characters count as 4 bytes, whereas under utf8 they count as 3 bytes. InnoDB has a limit of 767 bytes per index. So when using utf8 you can index 255 characters (767/3 = 255), but using utf8mb4 you can only index 191 characters (767/4 = 191).

You're absolutely able to add regular indexes for VARCHAR(255) fields using utf8mb4, but what happens is the index size is truncated at 191 characters automatically - like unique_key here:

Sequel Pro screenshot showing index truncated at 191 characters

This is fine, because regular indexes are just used to help MySQL search through your data more quickly. The whole field doesn't need to be indexed.

So, why does MySQL truncate the index automatically for regular indexes, but throw an explicit error when trying to do it for unique indexes? Well, for MySQL to be able to figure out if the value being inserted or updated already exists, it needs to actually index the whole value and not just part of it.

At the end of the day, if you want to have a unique index on a field, the entire contents of the field must fit into the index. For utf8mb4, this means reducing your VARCHAR field lengths to 191 characters or less. If you don't need utf8mb4 for that table or field, you can drop it back to utf8 and be able to keep your 255 length fields.
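A sketch of the difference described above (illustrative names; assumes InnoDB with the old 767-byte prefix limit):

```sql
CREATE TABLE demo (
  name VARCHAR(255)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4;

-- A regular index can be limited to a 191-character prefix:
ALTER TABLE demo ADD INDEX idx_name (name(191));

-- A unique index must cover the entire value, so this fails with error 1071:
--   ALTER TABLE demo ADD UNIQUE (name);

-- Shortening the column lets the unique index fit (191 * 4 = 764 <= 767):
ALTER TABLE demo MODIFY name VARCHAR(191);
ALTER TABLE demo ADD UNIQUE (name);
```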

Circumspection answered 15/12, 2016 at 23:59 Comment(0)

I fixed this issue by replacing

varchar(200) 

with

varchar(191)

Replace all unique or primary VARCHAR keys longer than 191 with 191, or set them as TEXT.

Neediness answered 9/4, 2018 at 6:48 Comment(1)
This is what worked for me too. I had a varchar(250), but the data was never that long. Changed it to varchar(100). Thanks for the idea :)Kermitkermy

Here is my original answer:

I just dropped the database and recreated it like this, and the error was gone:

drop database if exists rhodes; create database rhodes default CHARACTER set utf8 default COLLATE utf8_general_ci;

However, it doesn't work for all cases.

It is actually a problem of using indexes on VARCHAR columns with the character set utf8 (or utf8mb4), when the VARCHAR columns are longer than a certain number of characters. In the case of utf8mb4, that length is 191.

Please refer to the Long Index section in this article for more information how to use long indexes in MySQL database: http://hanoian.com/content/index.php/24-automate-the-converting-a-mysql-database-character-set-to-utf8mb4

Neron answered 15/2, 2017 at 4:50 Comment(1)
Solved the problem for openmeetings setup (BTW you saved my night :-)Schlep

To fix that, this worked for me like a charm.

ALTER DATABASE dbname CHARACTER SET utf8 COLLATE utf8_general_ci;
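Note that ALTER DATABASE only changes the default character set for tables created afterwards. To convert an existing table as well (the "convert to utf8" route other answers mention, which loses 4-byte characters such as emoji), something like:

```sql
ALTER TABLE mytable CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;
```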
Shelburne answered 12/4, 2019 at 12:48 Comment(2)
I confirm that my issue was the database collation too. I don't have any explanation for it. I use mysql v5.6.32 and DB collation was utf8mb4_unicode_ci.Civilize
try to add explain using phpmyadmin see i.imgur.com/fJzm4oE.pngAmbivert

I did some searching on this topic and finally got a custom change to work.

For MySQL Workbench 6.3.7 and above, a graphical interface is available:

  1. Start Workbench and select the connection.
  2. Go to Management or Instance and select Options File.
  3. If Workbench asks you for permission to read the configuration file, allow it by pressing OK twice.
  4. The Administrator options file window appears in the center.
  5. Go to the InnoDB tab and check innodb_large_prefix in the General section if it is not already checked.
  6. Set the innodb_default_row_format option value to DYNAMIC.

For versions below 6.3.7, direct options are not available, so you need to use the command prompt:

  1. Start CMD as administrator.
  2. Go to the directory where the MySQL server is installed. In most cases it's "C:\Program Files\MySQL\MySQL Server 5.7\bin", so the commands are "cd \" then "cd Program Files\MySQL\MySQL Server 5.7\bin".
  3. Run the command mysql -u userName -p databaseName. It asks for the password of the respective user; provide the password and enter the mysql prompt.
  4. Set some global settings by entering the commands below one by one: set global innodb_large_prefix=on; set global innodb_file_format=barracuda; set global innodb_file_per_table=true;
  5. Finally, alter the ROW_FORMAT of the required table; by default it is COMPACT, and it has to be set to DYNAMIC.
  6. Use the following command: alter table table_name ROW_FORMAT=DYNAMIC;
  7. Done.
Galumph answered 29/9, 2016 at 10:18 Comment(6)
I can't find this: 6. set innodb_default_row_format option value to DYNAMIC.Immaculate
If I use set global innodb_default_row_format = DYNAMIC; I see this message: ERROR 1193 (HY000): Unknown system variable 'innodb_default_row_format'Immaculate
How did you get from Workbench to CMD? I did it from Workbench; it has the option directly.Galumph
I do it from CMD because I don't see the option in Workbench.Immaculate
Did you launch CMD as admin?Galumph
I see the problem: this variable was introduced in v5.7.9 and I have v5.6.33, thanksImmaculate

Index Lengths & MySQL / MariaDB


Laravel uses the utf8mb4 character set by default, which includes support for storing "emojis" in the database. If you are running a version of MySQL older than the 5.7.7 release or MariaDB older than the 10.2.2 release, you may need to manually configure the default string length generated by migrations in order for MySQL to create indexes for them. You may configure this by calling the Schema::defaultStringLength method within your AppServiceProvider:

use Illuminate\Support\Facades\Schema;

/**
 * Bootstrap any application services.
 *
 * @return void
 */
public function boot()
{
    Schema::defaultStringLength(191);
}

Alternatively, you may enable the innodb_large_prefix option for your database. Refer to your database's documentation for instructions on how to properly enable this option.

Reference from blog : https://www.scratchcode.io/specified-key-too-long-error-in-laravel/

Reference from Official laravel documentation : https://laravel.com/docs/5.7/migrations

Donalt answered 21/2, 2019 at 9:32 Comment(0)

Change your collation. You can use utf8_general_ci, which supports almost all characters.

Skater answered 29/5, 2017 at 9:15 Comment(1)
"Almost" is a pretty good hint that this is not a long-term solutionCruet

Just changing utf8mb4 to utf8 when creating tables solved my problem. For example: CREATE TABLE ... DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci; to CREATE TABLE ... DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;.

Dynamotor answered 21/7, 2018 at 15:31 Comment(0)

This solved my issue

ALTER DATABASE dbname CHARACTER SET utf8 COLLATE utf8_general_ci;

Cheep answered 7/7, 2020 at 16:16 Comment(0)

Based on the columns shown below, those two variable string columns are using the utf8_general_ci collation (the utf8 charset is implied):

column1 varchar(20) utf8_general_ci
column2 varchar(500) utf8_general_ci

In MySQL, the utf8 charset uses a maximum of 3 bytes for each character. Thus, column2 alone would need to allocate 500*3 = 1500 bytes, which is much greater than the 767 bytes MySQL allows. That's why you are getting this 1071 error.

In other words, you need to calculate the character count based on the charset's byte representation, as not every charset is a single-byte representation (as you presumed). I.e. utf8 in MySQL uses at most 3 bytes per character, so 767/3 ≈ 255 characters; for utf8mb4, an at-most 4-byte representation, 767/4 ≈ 191 characters.
Patience answered 2/12, 2015 at 18:20 Comment(0)

I found this query useful for detecting which columns had an index violating the max key length:

SELECT
  c.TABLE_NAME As TableName,
  c.COLUMN_NAME AS ColumnName,
  c.DATA_TYPE AS DataType,
  c.CHARACTER_MAXIMUM_LENGTH AS ColumnLength,
  s.INDEX_NAME AS IndexName
FROM information_schema.COLUMNS AS c
INNER JOIN information_schema.statistics AS s
  ON s.table_name = c.TABLE_NAME
 AND s.COLUMN_NAME = c.COLUMN_NAME 
WHERE c.TABLE_SCHEMA = DATABASE()
  AND c.CHARACTER_MAXIMUM_LENGTH > 191 
  AND c.DATA_TYPE IN ('char', 'varchar', 'text')
Africanist answered 16/11, 2017 at 10:21 Comment(0)

In my case, I had this problem when I was backing up a database using the Linux redirection (output/input) characters. Therefore, I changed the syntax as described below. PS: using a Linux or macOS terminal.

Backup (without the > redirect)

# mysqldump -u root -p databasename -r bkp.sql

Restore (without the < redirect)

# mysql -u root -p --default-character-set=utf8 databasename
mysql> SET NAMES 'utf8';
mysql> SOURCE bkp.sql

The error "Specified key was too long; max key length is 767 bytes" simply disappeared.

Marasmus answered 14/5, 2018 at 18:28 Comment(0)

Please check whether sql_mode is

sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

If it is, change it to

sql_mode=NO_ENGINE_SUBSTITUTION

OR

restart your server after changing your my.cnf file (adding the following):

innodb_large_prefix=on
Sweatt answered 4/6, 2014 at 16:25 Comment(1)
innodb_large_prefix is deprecated in Mysql 8.0 linkOrvil

Due to prefix limitations, this error will occur. 767 bytes is the stated prefix limitation for InnoDB tables in MySQL versions before 5.7. It's 1,000 bytes for MyISAM tables. In MySQL version 5.7 and upwards, this limit has been increased to 3072 bytes.

Running the following on the server giving you the error should resolve your issue. It has to be run in the MySQL CLI.

SET GLOBAL innodb_file_format=Barracuda;
SET GLOBAL innodb_file_per_table=on;
SET GLOBAL innodb_large_prefix=on;
Scarito answered 18/1, 2019 at 15:58 Comment(0)

The Problem

There are max key length limits in MySQL.

  • InnoDB — max key length is 1,536 bytes (for 8kb page size) and 768 (for 4kb page size) (Source: Dev.MySQL.com).
  • MyISAM — max key length is 1,000 bytes (Source Dev.MySQL.com).

These are counted in bytes! So a UTF-8 character may take more than one byte to be stored in the key.

Therefore, you have only two immediate solutions:

  • Index only the first n characters of the text type.
  • Create a FULLTEXT search — everything within the text will be searchable, in a fashion similar to ElasticSearch.

Indexing the First N Characters of a Text Type

If you are creating a table, use the following syntax to index some field's first 255 characters: KEY sometextkey (SomeText(255)). Like so:

CREATE TABLE `MyTable` (
    `id` int(11) NOT NULL auto_increment,
    `SomeText` TEXT NOT NULL,
    PRIMARY KEY  (`id`),
    KEY `sometextkey` (`SomeText`(255))
);

If you already have the table, then you can add a unique key to a field with: ADD UNIQUE(ConfigValue(20));. Like so:

ALTER TABLE
MyTable
ADD UNIQUE(`ConfigValue`(20));

If the name of the field is not a reserved MySQL keyword, then the backticks (`) are not necessary around the field name.

Creating a FULL TEXT Search

A Full Text search will allow you to search the entirety of the value of your TEXT field. It will do whole-word matching if you use NATURAL LANGUAGE MODE, or partial word matching if you use one of the other modes. See more on the options for FullText here: Dev.MySQL.com

Create your table with the text, and add the Full text index...

ALTER TABLE
        MyTable
ADD FULLTEXT INDEX
        `SomeTextKey` (`SomeTextField`);

Then search your table like so...

SELECT
        MyTable.id, MyTable.Title,
MATCH
        (MyTable.Text)
AGAINST
        ('foobar' IN NATURAL LANGUAGE MODE) AS score
FROM
        MyTable
HAVING
        score > 0
ORDER BY
        score DESC;
Witmer answered 19/7, 2021 at 15:9 Comment(0)
O
1

I changed the column from varchar to nvarchar, and it worked for me. (In MySQL, NVARCHAR is a VARCHAR using the national character set, utf8mb3, which needs at most three bytes per character instead of utf8mb4's four, so the index key fits within the limit.)

Ocieock answered 22/10, 2020 at 17:1 Comment(0)
C
1

My own solution for this problem was a bit more simple and less dangerous than lowering the VARCHAR size of tables.

Situation: A CentOS 7 server running Plesk Obsidian 18.0.37 with MariaDB 5.5. I was trying to import a MySQL dump from a server running MariaDB 10.1.

Solution: Upgrading from MariaDB 5.5 to 10.6.

The steps were roughly based on this guide and this one:

  1. mysqldump -u admin -p`cat /etc/psa/.psa.shadow` --all-databases --routines --triggers > /root/all-databases.sql
  2. systemctl stop mariadb
  3. cp -a /var/lib/mysql/ /var/lib/mysql_backup
  4. Configure MariaDB repositories according to the official guide
    Make sure you meet Plesk's minimum version requirements detailed here
  5. yum install MariaDB-client MariaDB-server MariaDB-compat MariaDB-shared
  6. systemctl start mariadb
    In my case, the server failed to start here with an error: "Can't start server: Bind on TCP/IP port. Got error: 22: Invalid argument".
    The fix was to replace bind-address as follows in /etc/my.cnf and re-run the command:
    [mysqld]
    # OLD (broken)
    #bind-address = ::ffff:127.0.0.1
    # NEW
    bind-address = 127.0.0.1
    
  7. MYSQL_PWD=`cat /etc/psa/.psa.shadow` mysql_upgrade -uadmin
  8. plesk sbin packagemng -sdf
  9. rm -f /etc/init.d/mysql
  10. systemctl daemon-reload
Cathe answered 29/9, 2021 at 4:8 Comment(0)
A
1

OK, in my situation, I had to restore a database file from MySQL 5.7 to MySQL 5.6, and I met this problem.

The root cause is a version incompatibility: some of the indexed columns are longer than 191 characters (with a three-byte charset such as utf8, the limit would instead be 255).

So the solution is quite simple: make all the indexed columns' lengths a number less than 191 (e.g. 180).
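
A quick sanity check of where 191 comes from, assuming utf8mb4's worst case of four bytes per character:

```python
# 191 is the largest character count whose worst-case utf8mb4 encoding
# (4 bytes per character) still fits InnoDB's 767-byte prefix limit.
assert 191 * 4 <= 767  # 764 bytes: accepted
assert 192 * 4 > 767   # 768 bytes: rejected with error #1071
print("191 is the utf8mb4 cutoff under the 767-byte limit")
```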


Anchorite answered 4/4, 2022 at 10:15 Comment(0)
U
0

If you're creating something like:

CREATE TABLE IF NOT EXISTS your_table (
  id int(7) UNSIGNED NOT NULL AUTO_INCREMENT,
  name varchar(256) COLLATE utf8mb4_bin NOT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY name (name)
) ENGINE=INNODB DEFAULT CHARSET=utf8mb4 AUTO_INCREMENT=1 ROW_FORMAT=FIXED;

it should be something like

CREATE TABLE IF NOT EXISTS your_table (
      id int(7) UNSIGNED NOT NULL AUTO_INCREMENT,
      name varchar(256) COLLATE utf8mb4_bin NOT NULL,
      PRIMARY KEY (id)
    ) ENGINE=INNODB DEFAULT CHARSET=utf8mb4 AUTO_INCREMENT=1 ROW_FORMAT=FIXED;

but then you need to enforce the uniqueness of that column in application code, or add a new column holding an MD5 or SHA1 hash of the varchar column and put the unique key on that.
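
The hash-column idea can be sketched like this (the function name here is illustrative, not part of any library):

```python
import hashlib

# Store a fixed-width digest of the long value in a separate column and
# declare the UNIQUE key on the digest instead of the oversized varchar.
def name_digest(value: str) -> str:
    return hashlib.sha1(value.encode("utf-8")).hexdigest()

digest = name_digest("some very long name" * 50)
print(len(digest))  # a SHA1 hex digest is always 40 characters
```

A CHAR(40) digest column indexes in 40 bytes as latin1 or 160 bytes as utf8mb4, comfortably under the 767-byte limit.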

Underhanded answered 20/9, 2014 at 12:6 Comment(2)
Then your unique key lost. I think the better way is simply reduce the length of name to 191.Koralle
Why to check for uniqueness programmatically if it's native DB feature?Billen
P
-1

For me, the issue of "#1071 - Specified key was too long; max key length is 767 bytes" got resolved after changing the primary key / unique key definition to limit the indexed prefix of each column to 200 characters.

ALTER TABLE `mytable` ADD UNIQUE (
`column1` (200) ,
`column2` (200)
);
Pham answered 18/3, 2016 at 16:36 Comment(1)
Can you explain where that number of 200 comes from, if other answers contain a lower number like 191?Cruet
P
-2

If you have changed innodb_log_file_size recently, try restoring the previous value that worked.

Protuberate answered 5/8, 2015 at 11:30 Comment(0)
C
-2

My fix for this very same issue was to pass the charset as an option in the third argument:

queryInterface.createTable(
  tableName,
  { /*... columns*/ },
  { charset: 'utf8' } 
)

Otherwise sequelize would create tables as utf8mb4.

Countermeasure answered 2/3, 2019 at 20:5 Comment(1)
Is there any problem in using utf8mb4?Cruet
Z
-4

Change the CHARSET of the offending indexed field to "latin1",
i.e. ALTER TABLE tbl CHANGE myfield myfield varchar(600) CHARACTER SET latin1 DEFAULT NULL;
latin1 takes one byte per character instead of up to four
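
The space saving, and the catch, can both be seen directly from the encodings; a small Python illustration (the sample strings are arbitrary):

```python
# latin1 stores exactly one byte per character, so varchar(600) indexes
# in 600 bytes; utf8mb4 must reserve up to four bytes per character.
s = "1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"  # ASCII-only, e.g. a bitcoin address
assert len(s.encode("latin-1")) == len(s)  # one byte per character

# The catch: latin1 cannot represent most non-Western text at all.
try:
    "日本語".encode("latin-1")
except UnicodeEncodeError:
    print("latin1 cannot encode this text")
```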

Zamarripa answered 27/12, 2013 at 13:14 Comment(6)
And what if he will try to insert non-latin1 characters?Billen
This is the worst thing to do. Never do that.Instance
in my case its on a column which stores path names with only hex characters ... so this might be a solution for someone. it really depends on what you want to store...Darnel
I had to downvote this, since using latin1 is not a solution in 2016AD. I still have horrible nightmares about the time we had to convert a database from latin1 to utf8 back in 2007. Ugh. Not pretty. At all.Geier
I had the same problem on a unique index on two columns where I store bitcoin addresses and transaction IDs. These will always be ASCII characters, so I will use latin1 on these columns. Upvoting.Eyeglasses
Depending on what is going to be stored on the columns, I see no problem with this solution. My database was migrated from latin1 to utf8, also had that nightmare, but considering this column would only store url or file paths for examble, I would consider this, if MySQL 5.7 wasn't an option. UpvotingCaecum

© 2022 - 2024 — McMap. All rights reserved.