Assembly Language (x86): How to create a loop to calculate Fibonacci sequence

I am programming assembly language (x86) in MASM using Visual Studio 2013 Ultimate. I am trying to use an array to calculate a Fibonacci sequence for n elements. In other words, I want to go to an array element, obtain the two elements before it, add those up, and store the result in another array.

I am having trouble setting up the index registers to make this work.

I have my program set up like this:

TITLE fibonacci.asm

INCLUDE Irvine32.inc

.data
    fibInitial  BYTE 0, 1, 2, 3, 4, 5, 6
    fibComputed BYTE 5 DUP(0)

.code
main PROC

    MOVZX si, fibInitial
    MOVZX di, fibComputed
    MOV   cl, LENGTHOF fibInitial

L1:
    MOV   ax, [si - 1]
    MOV   dx, [si - 2]
    MOV   bp, ax + dx
    MOV   dl, TYPE fibInitial
    MOVZX si, dl
    MOV   [edi], bp
    MOV   dh, TYPE fibComputed
    MOVZX di, dl
    loop L1

exit
main ENDP
END main

I cannot compile this because of an error message that says "error A2031: must be index or base register" for the line MOV bp, ax + dx. However, I'm certain that there are other logic errors I am overlooking.

Jeramyjerba asked 18/9, 2015 at 19:32 Comment(5)
MOV bp, ax + dx is not a valid x86 instruction. In 32-bit code you could use lea ebp, [eax + edx] (lea bp, [ax + dx] would not work, since [ax + dx] isn't a valid effective address). Note that ebp has a specific purpose in certain situations, so you might want to consider using a different register.Confiscable
Also, your attempts to read from [si - 1] and [si - 2] are incorrect. si does not hold a valid address at that point.Confiscable
@Confiscable How can I reference elements 1 or 2 below the current element of an array in a loop (ignore that there are no elements below 2 right now for fibInitial)?Jeramyjerba
I suggest that you start by reading an x86 assembly tutorial, such as Art Of Assembly, since you seem to have misunderstood some of the basics.Confiscable
Yup, I was going to start writing an answer, but there are so many mistakes it would be huge. Make sure you keep track of when you're using a mov reg, imm32 to put an address into a register, and when you're doing mov reg, [ addr ] to load data from memory.Lesko

related: Code-golf print the first 1000 digits of Fib(10**9): my x86 asm answer using an extended-precision adc loop, and converting binary to strings. The inner loop is optimized for speed, other parts for size.


Computing a Fibonacci sequence only requires keeping two pieces of state: the current and previous element. I have no idea what you're trying to do with fibInitial, other than counting its length. This isn't perl where you do for $n (0..5).

I know you're just starting to learn asm, but I'm still going to talk about performance. There's not much reason to learn asm without knowing what's fast and what's not. If you don't need performance, let a compiler make the asm for you, from C sources. Also see the other links at https://stackoverflow.com/tags/x86/info

Using registers for your state simplifies the problem of needing to look back at a[i-1] and a[i-2] while calculating a[i]. You start with curr=1, prev=0, and the first stored element is a[0] = curr. To produce the "modern" starting-with-zero sequence of Fibonacci numbers, start with curr=0, prev=1.
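
In C, that two-variable loop looks roughly like this (just a sketch; fib_scalar is a placeholder name, and the asm function below implements the same thing):

// sketch: scalar Fibonacci with only curr/prev as state
#include <stdint.h>

void fib_scalar(uint32_t *buf, uint32_t count)
{
    uint32_t curr = 1, prev = 0;       // or curr=0, prev=1 for the zero-based sequence
    for (uint32_t i = 0; i < count; i++) {
        buf[i] = curr;                 // a[i] = curr
        uint32_t next = curr + prev;   // next value in the sequence
        prev = curr;
        curr = next;
    }
}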

Lucky for you, I was just thinking about an efficient loop for fibonacci code recently, so I took the time to write up a complete function. See below for an unrolled and a vectorized version (saves on store instructions, but also makes 64bit ints fast even when compiling for a 32bit CPU):

; fib.asm
;void fib(int32_t *dest, uint32_t count);
; not-unrolled version.  See below for a version which avoids all the mov instructions
global fib
fib:
    ; 64bit SysV register-call ABI:
    ; args: rdi: output buffer pointer.  esi: count  (and you can assume the upper32 are zeroed, so using rsi is safe)

    ;; locals:  rsi: endp
    ;; eax: current   edx: prev
    ;; ecx: tmp
    ;; all of these are caller-saved in the SysV ABI, like r8-r11
    ;; so we can use them without push/pop to save/restore them.
    ;; The Windows ABI is different.

    test   esi, esi       ; test a reg against itself instead of cmp esi, 0
    jz     .early_out     ; count == 0.  

    mov    eax, 1         ; current = 1
    xor    edx, edx       ; prev    = 0

    lea    rsi, [rdi + rsi * 4]  ; endp = &out[count];  // loop-end pointer
    ;; lea is very useful for combining add, shift, and non-destructive operation
    ;; this is equivalent to shl rsi, 2  /  add rsi, rdi

align 16
.loop:                    ; do {
    mov    [rdi], eax     ;   *buf = current
    add    rdi, 4         ;   buf++

    lea    ecx, [rax + rdx] ; tmp = curr+prev = next_cur
    mov    edx,  eax      ; prev = curr
    mov    eax,  ecx      ; curr=tmp
 ;; see below for an unrolled version that doesn't need any reg->reg mov instructions

    ; you might think this would be faster:
    ; add  edx, eax    ; but it isn't
    ; xchg eax, edx    ; This is as slow as 3 mov instructions, but we only needed 2 thanks to using lea

    cmp    rdi, rsi       ; } while(buf < endp);
    jb    .loop           ; jump if (rdi BELOW rsi).  unsigned compare
    ;; the LOOP instruction is very slow, avoid it

.early_out:
    ret

An alternate loop condition could be

    dec     esi         ; often you'd use ecx for counts, but we had it in esi
    jnz     .loop

AMD CPUs can fuse cmp/branch, but not dec/branch. Intel CPUs can also macro-fuse dec/jnz. (Or signed less than zero / greater than zero). dec/inc don't update the Carry flag, so you can't use them with above/below unsigned ja/jb. I think the idea is that you could do an adc (add with carry) in a loop, using inc/dec for the loop counter to not disturb the carry flag, but partial-flags slowdowns make this bad on modern CPUs.

lea ecx, [eax + edx] needs an extra byte (address-size prefix), which is why I used a 32bit dest and a 64bit address. (Those are the default operand sizes for lea in 64bit mode). No direct impact on speed, just indirect through code size.

An alternate loop body could be:

    mov  ecx, eax      ; tmp=curr.  This stays true after every iteration
.loop:

    mov  [rdi], ecx
    add  ecx, edx      ; tmp+=prev  ;; shorter encoding than lea
    mov  edx, eax      ; prev=curr
    mov  eax, ecx      ; curr=tmp

Unrolling the loop to do more iterations would mean less shuffling. Instead of mov instructions, you just keep track of which register is holding which variable. i.e. you handle assignments with a sort of register renaming.

.loop:     ;; on entry:       ; curr:eax  prev:edx
    mov  [rdi], eax             ; store curr
    add  edx, eax             ; curr:edx  prev:eax
.oddentry:
    mov  [rdi + 4], edx         ; store curr
    add  eax, edx             ; curr:eax  prev:edx

    ;; we're back to our starting state, so we can loop
    add  rdi, 8
    cmp  rdi, rsi
    jb   .loop

The thing with unrolling is that you need to clean up any odd iterations that are left over. Power-of-two unroll factors can make the cleanup loop slightly easier, but adding 12 isn't any faster than adding 16. (See the previous revision of this post for a silly unroll-by-3 version using lea to produce curr + prev in a 3rd register, because I failed to realize that you don't actually need a temp. Thanks to rcgldr for catching that.)

See below for a complete working unrolled version which handles any count.


Test frontend (new in this version: a canary element to detect asm bugs writing past the end of the buffer.)

// fib-main.c
#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

void fib(uint32_t *buf, uint32_t count);

int main(int argc, const char *argv[]) {
    uint32_t count = 15;
    if (argc > 1) {
        count = atoi(argv[1]);
    }
    uint32_t buf[count+1]; // allocated on the stack
    // Fib overflows uint32 at count = 48, so it's not like a lot of space is useful

    buf[count] = 0xdeadbeefUL;
    // uint32_t count = sizeof(buf)/sizeof(buf[0]);
    fib(buf, count);
    for (uint32_t i = 0 ; i < count ; i++){
        printf("%u ", buf[i]);
    }
    putchar('\n');

    if (buf[count] != 0xdeadbeefUL) {
        printf("fib wrote past the end of buf: sentinel = %x\n", buf[count]);
    }
}

This code is fully working and tested (unless I missed copying a change in my local file back into the answer >.<):

peter@tesla:~/src/SO$ yasm -f elf64 fib.asm && gcc -std=gnu11 -g -Og fib-main.c fib.o
peter@tesla:~/src/SO$ ./a.out 48
1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597 2584 4181 6765 10946 17711 28657 46368 75025 121393 196418 317811 514229 832040 1346269 2178309 3524578 5702887 9227465 14930352 24157817 39088169 63245986 102334155 165580141 267914296 433494437 701408733 1134903170 1836311903 2971215073 512559680 

unrolled version

Thanks again to rcgldr for getting me thinking about how to handle odd vs. even count in the loop setup, rather than with a cleanup iteration at the end.

I went for branchless setup code, which adds 4 * count%2 to the starting pointer. That can be zero, but adding zero is cheaper than branching to see if we should or not. The Fibonacci sequence overflows a register very quickly, so keeping the prologue code tight and efficient is important, not just the code inside the loop. (If we're optimizing at all, we'd want to optimize for many calls with short length).
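
Roughly the same setup and loop in C (a sketch with a made-up function name, not a line-for-line match of the asm below, which also special-cases count==1 up front):

// sketch: branchless odd-count setup + two elements per loop iteration
#include <stdint.h>

void fib_unrolled2(uint32_t *buf, uint32_t count)
{
    if (count == 0) return;
    uint32_t curr = 1;
    uint32_t prev = count & 1;     // 0 for an even count, 1 for odd: no branch needed
    buf[0] = curr;                 // always store buf[0] = 1; count==1 is already done
    for (uint32_t i = count & 1; i < count; i += 2) {
        buf[i]     = curr;         // for even counts this re-stores buf[0], which is cheap
        prev += curr;              // prev now holds the next value (it plays the role of curr)
        buf[i + 1] = prev;
        curr += prev;              // back to the starting arrangement, like the asm loop
    }
}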

    ; 64bit SysV register-call ABI
    ; args: rdi: output buffer pointer.  rsi: count

    ;; locals:  rsi: endp
    ;; eax: current   edx: prev
    ;; ecx: tmp
    ;; all of these are caller-saved in the SysV ABI, like r8-r11
    ;; so we can use them without push/pop to save/restore them.
    ;; The Windows ABI is different.

;void fib(int32_t *dest, uint32_t count);  // unrolled version
global fib
fib:
    cmp    esi, 1
    jb     .early_out       ; count below 1  (i.e. count==0, since it's unsigned)

    mov    eax, 1           ; current = 1
    mov    [rdi], eax
    je     .early_out       ; count == 1, flags still set from cmp
    ;; need this 2nd early-out because the loop always does 2 iterations

;;; branchless handling of odd counts:
;;;   always do buf[0]=1, then start the loop from 0 or 1
;;; Writing to an address you just wrote to is very cheap
;;; mov/lea is about as cheap as best-case for branching (correctly-predicted test/jcc for count%2==0)
;;; and saves probably one unconditional jump that would be needed either in the odd or even branch

    mov    edx, esi         ;; we could save this mov by using esi for prev, and loading the end pointer into a different reg
    and    edx, eax         ; prev = count & 1 = count%2

    lea    rsi, [rdi + rsi*4] ; end pointer: same regardless of starting at 0 or 1

    lea    rdi, [rdi + rdx*4] ; buf += count%2
    ;; even count: loop starts at buf[0], with curr=1, prev=0
    ;; odd  count: loop starts at buf[1], with curr=1, prev=1

align 16  ;; the rest of this func is just *slightly* longer than 16B, so there's a lot of padding.  Tempting to omit this alignment for CPUs with a loop buffer.
.loop:                      ;; do {
    mov    [rdi], eax       ;;   *buf = current
             ; on loop entry: curr:eax  prev:edx
    add   edx, eax          ; curr:edx  prev:eax

;.oddentry: ; unused, we used a branchless sequence to handle odd counts
    mov   [rdi+4], edx
    add   eax, edx          ; curr:eax  prev:edx
                            ;; back to our starting arrangement
    add    rdi, 8           ;;   buf++
    cmp    rdi, rsi         ;; } while(buf < endp);
    jb    .loop

;   dec   esi   ;  set up for this version with sub esi, edx; instead of lea
;   jnz   .loop
.early_out:
    ret

To produce the starting-with-zero sequence, do

curr=count&1;   // and esi, 1
buf += curr;    // lea rdi, [rdi + rsi*4]
prev= 1 ^ curr; // xor eax, esi

instead of the current

curr = 1;
prev = count & 1;
buf += count & 1;

We can also save a mov instruction in both versions by using esi to hold prev, now that prev depends on count.

  ;; loop prologue for sequence starting with 1 1 2 3
  ;; (using different regs and optimized for size by using fewer immediates)
    mov    eax, 1               ; current = 1
    cmp    esi, eax
    jb     .early_out           ; count below 1
    mov    [rdi], eax
    je     .early_out           ; count == 1, flags still set from cmp

    lea    rdx, [rdi + rsi*4]   ; endp
    and    esi, eax             ; prev = count & 1
    lea    rdi, [rdi + rsi*4]   ; buf += count & 1
  ;; eax:curr esi:prev    rdx:endp  rdi:buf
  ;; end of old code

  ;; loop prologue for sequence starting with 0 1 1 2
    cmp    esi, 1
    jb     .early_out           ; count below 1, no stores
    mov    dword [rdi], 0       ; store first element
    je     .early_out           ; count == 1, flags still set from cmp

    lea    rdx, [rdi + rsi*4]   ; endp
    mov    eax, 1               ; prev = 1
    and    esi, eax             ; curr = count&1
    lea    rdi, [rdi + rsi*4]   ; buf += count&1
    xor    eax, esi             ; prev = 1^curr
    ;; ESI:curr EAX:prev  (opposite of other setup)
  ;;

  ;; optimized for code size, NOT speed.  Prob. could be smaller, esp. if we want to keep the loop start aligned, and jump between before and after it.
  ;; most of the savings are from avoiding mov reg, imm32,
  ;; and from counting down the loop counter, instead of checking an end-pointer.
  ;; loop prologue for sequence starting with 0 1 1 2
    xor    edx, edx
    cmp    esi, 1
    jb     .early_out         ; count below 1, no stores
    mov    [rdi], edx         ; store first element
    je     .early_out         ; count == 1, flags still set from cmp

    xor    eax, eax  ; movzx after setcc would be faster, but one more byte
    shr    esi, 1             ; two counts per iteration, divide by two
  ;; shift sets CF = the last bit shifted out
    setc   al                 ; curr =   count&1
    setnc  dl                 ; prev = !(count&1)

    lea    rdi, [rdi + rax*4] ; buf+= count&1

  ;; extra uop or partial register stall internally when reading eax after writing al, on Intel (except P4 & silvermont)
  ;; EAX:curr EDX:prev  (same as 1 1 2 setup)
  ;; even count: loop starts at buf[0], with curr=0, prev=1
  ;; odd  count: loop starts at buf[1], with curr=1, prev=0

  .loop:
       ...
    dec  esi                  ; 1B smaller than 64b cmp, needs count/2 in esi
    jnz .loop
  .early_out:
    ret

vectorized:

The Fibonacci sequence isn't particularly parallelizable. There's no simple way to get F(i+4) from F(i) and F(i-4), or anything like that. What we can do with vectors is fewer stores to memory. Start with:

a = [f3 f2 f1 f0 ]   -> store this to buf
b = [f2 f1 f0 f-1]

Then a+=b; b+=a; a+=b; b+=a; produces:

a = [f7 f6 f5 f4 ]   -> store this to buf
b = [f6 f5 f4 f3 ]

This is less silly when working with two 64bit ints packed into a 128b vector. Even in 32bit code, you can use SSE to do 64bit integer math.
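
A rough intrinsics version of that store pattern (a sketch with a made-up name; assumes SSE2, produces the zero-based 0 1 1 2 ... sequence like the asm's setup, and for brevity only handles count being a multiple of 2, skipping the odd-count cleanup the asm below has to deal with):

// sketch: two 64-bit Fibonacci elements per vector, one store per two elements
#include <stdint.h>
#include <emmintrin.h>

void fib64_sse_sketch(uint64_t *buf, uint32_t count)
{
    __m128i a = _mm_set_epi64x(1, 0);   // [f1 f0]   (high qword, low qword)
    __m128i b = _mm_set_epi64x(0, 1);   // [f0 f-1]
    for (uint32_t i = 0; i < count; i += 2) {
        _mm_storeu_si128((__m128i *)(buf + i), a);  // store f(i) and f(i+1)
        b = _mm_add_epi64(b, a);        // b = [f(i+2) f(i+1)]
        a = _mm_add_epi64(a, b);        // a = [f(i+3) f(i+2)]
    }
}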

A previous version of this answer has an unfinished packed-32bit vector version that doesn't properly handle count%4 != 0. To load the first 4 values of the sequence, I used pmovzxbd so I didn't need 16B of data when I could use only 4B. Getting the first -1 .. 1 values of the sequence into vector registers is a lot easier, because there's only one non-zero value to load and shuffle around.

;void fib64_sse(uint64_t *dest, uint32_t count);
; using SSE for fewer but larger stores, and for 64bit integers even in 32bit mode
global fib64_sse
fib64_sse:
    mov eax, 1
    movd    xmm1, eax               ; xmm1 = [0 1] = [f0 f-1]
    pshufd  xmm0, xmm1, 11001111b   ; xmm0 = [1 0] = [f1 f0]

    sub esi, 2
    jae .entry  ; make the common case faster with fewer branches
    ;; could put the handling for count==0 and count==1 right here, with its own ret

    jmp .cleanup
align 16
.loop:                          ; do {
    paddq   xmm0, xmm1          ; xmm0 = [ f3 f2 ]
.entry:
    ;; xmm1: [ f0 f-1 ]         ; on initial entry, count already decremented by 2
    ;; xmm0: [ f1 f0  ]
    paddq   xmm1, xmm0          ; xmm1 = [ f4 f3 ]  (or [ f2 f1 ] on first iter)
    movdqu  [rdi], xmm0         ; store 2nd last compute result, ready for cleanup of odd count
        add     rdi, 16         ;   buf += 2
    sub esi, 2
        jae   .loop             ; } while((count-=2) >= 0);
    .cleanup:
    ;; esi <= 0 : -2 on the count=0 special case, otherwise -1 or 0

    ;; xmm1: [ f_rc   f_rc-1 ]  ; rc = count Rounded down to even: count & ~1
    ;; xmm0: [ f_rc+1 f_rc   ]  ; f(rc+1) is the value we need to store if count was odd
    cmp esi, -1
    jne   .out  ; this could be a test on the Parity flag, with no extra cmp, if we wanted to be really hard to read and need a big comment explaining the logic
    ;; xmm1 = [f1 f0]
    movhps  [rdi], xmm1         ; store the high 64b of xmm1.  There is no integer version of this insn, but that doesn't matter
    .out:
        ret

No point unrolling this further; the dep-chain latency limits throughput, so at best we can average storing one element per cycle regardless. Reducing the loop overhead in uops can help for hyperthreading, but that's pretty minor.

As you can see, handling all the corner cases even when unrolling by two is quite complex to keep track of. It requires extra startup overhead, even when you're trying to optimize that to keep it to a minimum. It's easy to end up with a lot of conditional branches.

updated main:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <stdlib.h>

#ifdef USE32
void fib(uint32_t *buf, uint32_t count);
typedef uint32_t buftype_t;
#define FMTx PRIx32
#define FMTu PRIu32
#define FIB_FN fib
#define CANARY 0xdeadbeefUL
#else
void fib64_sse(uint64_t *buf, uint32_t count);
typedef uint64_t buftype_t;
#define FMTx PRIx64
#define FMTu PRIu64
#define FIB_FN fib64_sse
#define CANARY 0xdeadbeefdeadc0deULL
#endif

#define xstr(s) str(s)
#define str(s) #s

int main(int argc, const char *argv[]) {
    uint32_t count = 15;
    if (argc > 1) {
        count = atoi(argv[1]);
    }
    int benchmark = argc > 2;

    buftype_t buf[count+1]; // allocated on the stack
    // Fib overflows uint32 at count = 48, so it's not like a lot of space is useful

    buf[count] = CANARY;
    // uint32_t count = sizeof(buf)/sizeof(buf[0]);
    if (benchmark) {
       int64_t reps = 1000000000 / count;
       for (int i=0 ; i<=reps ; i++)
           FIB_FN(buf, count);

    } else {
       FIB_FN(buf, count);
       for (uint32_t i = 0 ; i < count ; i++){
           printf("%" FMTu " ", buf[i]);
       }
       putchar('\n');
    }
    if (buf[count] != CANARY) {
        printf(xstr(FIB_FN) " wrote past the end of buf: sentinel = %" FMTx "\n", buf[count]);
    }
}

Performance

For count just below 8192 (fitting in L1d cache), the unrolled-by-two non-vector version runs near its theoretical-max throughput of 1 store per cycle (3.5 instructions per cycle), on my Sandybridge i5-2500k. 8192 * 4B/int = 32768 = L1 cache size. In practice, I see ~3.3 to ~3.4 insn / cycle. I'm counting the entire program with Linux perf stat, though, not just the tight loop. (Note the repeat loop calling the Fib function, so the majority of the program's time is spent in it, even though there's some startup overhead.)
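
A measurement along these lines gives that kind of number (the exact invocation here is a guess: -DUSE32 selects the scalar fib in the updated main above, and any extra argument turns on its repeat-loop benchmark mode):

yasm -f elf64 fib.asm && gcc -std=gnu11 -O2 -DUSE32 fib-main.c fib.o
perf stat ./a.out 8192 benchmark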

Anyway, there's not really any point unrolling further. And obviously this stopped being a Fibonacci sequence after count=47, since we use uint32_t. However, for large count, the throughput is limited by memory bandwidth, down to ~2.6 insn / cycle. At this point we're basically looking at how to optimize memset.

The 64-bit-integer version using movdqu stores (fib64_sse) runs at 3 insns per cycle (one 128b store per two clocks) up to an array size of about 1.5 times L2 cache size. (i.e. ./fib64 49152). As the array size goes up to larger fractions of L3 cache size, performance decreases down to ~2 insn per cycle (one store per 3 clocks) at 3/4 of L3 cache size. It levels out to 1 store per 6 cycles at sizes > L3 cache.

So storing with vectors does better than scalar stores for arrays that are too big for L1d cache but still small enough to fit in L2 cache.

Lesko answered 18/9, 2015 at 21:34 Comment(11)
You could have unrolled the loop to two iterations, alternating between ecx and edx with your example, as there's no need to keep a value in eax: | add ecx,edx | ... | add edx,ecx | .Stochastic
@rcgldr: Thanks! IDK how I didn't see that, and got hung up on using a 3rd piece of storage. (see my unrolled-by-3 version in the previous revision). I was looking at a non-unrolled C version that used a temp, and somehow failed to see that prev became unneeded in the same step that the new curr is produced. Updated my answer to simplify the unroll.Lesko
You could handle the odd number case up front by changing the initial values used for ecx and edx , then branch into the middle of the loop. To initialize: | mov edx,count | mov eax,1 | and edx,eax | sub eax,edx | (or reverse eax / edx, depending on loop).Stochastic
@rcgldr: branches are for weenies :P Another great suggestion, though. Updated with a branchless version (if you don't count the extra jcc near the very beginning, to special-case count==1 as well as count==0, but those will both be predicted perfectly unless someone actually calls this with count<=1. I got away with one fused compare-and-branch, and a second branch after a couple movs that don't affect flags :) This should be good even on CPUs that don't like to see multiple branches within a group of 4 insns. (we know decoding will start at the fn entry point.)Lesko
@rcgldr: en.wikipedia.org/wiki/Fibonacci_number says either way is valid. I think I could get the code to start at 0 by doing prev=1; curr=0;. For odd counts where we don't overwrite buf[0], prev=0; curr=1; So, curr=count&1; buf+=curr; prev=1 ^ curr;Lesko
In math, fib(0) = 0, but this is different from stating where Fibonacci numbers start from, as mentioned in the wiki article, where they can start from fib(0) = 0, or from fib(1) = 1.Stochastic
@rcgldr: thanks, that finally makes sense. IDK if I'd ever noticed before, but the wikipedia article does say F_0 = 0 and F_1 = 1 a couple paragraphs down, when talking about where the sequence starts.Lesko
@rcgldr: I made a version that uses vectors to accumulate the sequence and store less frequently. To keep the amount of special cases and cleanup code down, I used 64bit ints, so there are only two elements per vector. It's the same speed whether it fits in L1 or only L2 cache. (long after Fib(94) overflowed a 64bit unsigned int...)Lesko
@revolution9540: updated my answer. I tried not to over-complicate the early parts, so it's still useful for a beginner.Lesko
@PeterCordes I really appreciate such a detailed answer. Very great insight. I ended up calculating the first three values as independent instructions outside of a loop, and then calculating the rest using a loop. I was able to do it using only mov, add, inc, and dec instructions in about 20 lines of code.Jeramyjerba
@revolution9540: cool, glad that helped. You'll probably learn some ideas or tricks from seeing how I used fewer instructions by carefully choosing how I handled all the special cases, so the same code did the needed action in multiple special cases. Esp. cutting down on branch instructions is a good idea, because taken branches (even correctly predicted) limit the CPUs ability to issue 4 uops (~= instructions) per clock. Seeing what's coming up is how CPUs find independent things they can do in parallel.Lesko

Considering that fib(93) = 12200160415121876738 is the largest Fibonacci number that will fit into a 64 bit unsigned integer, there may not be much point in trying to optimize this, unless calculating fib(n) modulo some (usually prime) number for large n.

There is a way to directly calculate fib(n) in log2(n) iterations, using a Lucas sequence method or a matrix method for Fibonacci numbers. The Lucas sequence method is faster and is shown below. These could be modified to perform the math modulo some number.

/* lucas sequence method */
uint64_t fibl(int n) {
    uint64_t a, b, p, q, qq, aq;
    a = q = 1;
    b = p = 0;
    while(1){
        if(n & 1) {
            aq = a*q;
            a = b*q + aq + a*p;
            b = b*p + aq;
        }
        n >>= 1;
        if(n == 0)
            break;
        qq = q*q;
        q = 2*p*q + qq;
        p = p*p + qq;
    }
    return b;
}
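
For example, the "modulo some number" variation mentioned above could look like this (a sketch only: same loop with every product reduced, assuming the modulus m fits in 32 bits so none of the 64-bit products can overflow):

/* lucas sequence method, mod m (assumes m < 2^32) */
uint64_t fibl_mod(int n, uint64_t m) {
    uint64_t a, b, p, q, qq, aq;
    a = q = 1;
    b = p = 0;
    while(1){
        if(n & 1) {
            aq = a*q % m;
            a = (b*q % m + aq + a*p % m) % m;
            b = (b*p % m + aq) % m;
        }
        n >>= 1;
        if(n == 0)
            break;
        qq = q*q % m;
        q = (2*(p*q % m) + qq) % m;
        p = (p*p % m + qq) % m;
    }
    return b;
}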
Stochastic answered 21/9, 2015 at 0:46 Comment(1)
Interesting. I assumed there wasn't any fast way to compute fib(n). For my answer, I spent a lot of time optimizing the setup / cleanup so it's as fast as possible for short calls. My vector version does quite well I think, esp. if n is odd. Optimizing for low overhead with low n was interesting, and a lot harder than optimizing just the loop. (That part was interesting too, just to see what kind of results I could get for a computation that had that kind of dependency on previous computation, even though fib(n) itself isn't interesting after it overflows.. unless BigInt...)Lesko
.386
.model flat, stdcall
.stack 4096
ExitProcess proto, dwExitCode:dword

.data
    fib word 1, 1, 5 dup(?);you create an array with the number of the fibonacci series that you want to get
.code
main proc
    mov esi, offset fib ;set the stack index to the offset of the array.Note that this can also be set to 0
    mov cx, lengthof fib ;set the counter for the array to the length of the array. This keeps track of the number of times your loop will go

L1: ;start the loop
    mov ax, [esi]; move the first element to ax ;move the first element in the array to the ax register
    add ax, [esi + type fib]; add the second element to the value in ax. Which gives the next element in the series
    mov[esi + 2* type fib], ax; assign the addition to the third value in the array, i.e the next number in the fibonacci series
    add esi, type fib;increment the index to move to the next value
    loop L1; repeat

    invoke ExitProcess, 0
main endp
end main
Ayana answered 26/7, 2018 at 18:50 Comment(4)
Ideally answers should explain how they solve the asker's problem.Corniculate
Okay, I'll adjust as necessaryAyana
Usually that means some text outside the code block to give the big picture. Also, this would be a lot more readable if you indent the comments to a consistent column so it's easier to read just the instructions without getting a wall-of-text effect. (See the asm code blocks in my answer on this question for an example of formatting/style).Lesko
In 32-bit code loop uses ECX. Your code will break if the high bytes of ECX happen to be non-zero on entry to main because you'll loop 64k times! Just use ECX, or better don't use the slow loop instruction at all, and use cmp esi, fib + sizeof fib - 8 / jb L1. (i.e. do {} while(p < endp). Also note that after a loop iteration, ax has most recent Fib(n), so if you init AX before the loop you only need to reload the old one inside.Lesko
