In contrast to stemming, lemmatization looks beyond simple word reduction and considers a language's full vocabulary to apply morphological analysis to words. For example, the lemma of 'was' is 'be' and the lemma of 'mice' is 'mouse'. Further, the lemma of 'meeting' might be 'meet' or 'meeting', depending on its use in a sentence.

import spacy
nlp = spacy.load('en_core_web_sm')
doc1 = nlp(u"I am a runner running in a race because I love to run since I ran today")

# token.lemma holds the lemma's hash value; token.lemma_ holds the lemma text
for token in doc1:
    print(f'{token.text:<{10}}{token.pos_:<{10}}{token.lemma:<{25}}{token.lemma_:<{10}}')
I         PRON      4690420944186131903      I         
am        AUX       10382539506755952630     be        
a         DET       11901859001352538922     a         
runner    NOUN      12640964157389618806     runner    
running   VERB      12767647472892411841     run       
in        ADP       3002984154512732771      in        
a         DET       11901859001352538922     a         
race      NOUN      8048469955494714898      race      
because   SCONJ     16950148841647037698     because   
I         PRON      4690420944186131903      I         
love      VERB      3702023516439754181      love      
to        PART      3791531372978436496      to        
run       VERB      12767647472892411841     run       
since     SCONJ     10066841407251338481     since     
I         PRON      4690420944186131903      I         
ran       VERB      12767647472892411841     run       
today     NOUN      11042482332948150395     today     

In this case we see that `running`, `run` and `ran` all resolve to the same lemma, `run` (hash 12767647472892411841).
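
The long integer in the third column is the lemma's hash in the pipeline's string store; `token.lemma` gives the hash and `token.lemma_` resolves it back to text. As a minimal check (a sketch reusing `nlp` and `doc1` from above), we can group the tokens by that hash:

from collections import defaultdict

forms_by_lemma = defaultdict(set)
for token in doc1:
    forms_by_lemma[token.lemma].add(token.text.lower())

for lemma_hash, forms in forms_by_lemma.items():
    if len(forms) > 1:
        # resolve the hash back to its text via the vocab's StringStore
        print(nlp.vocab.strings[lemma_hash], sorted(forms))
# should print: run ['ran', 'run', 'running']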

 
def show_lemmas(text):
    """Print each token's text, part-of-speech tag, lemma hash and lemma text."""
    for token in text:
        print(f'{token.text:{12}} {token.pos_:{6}} {token.lemma:<{22}} {token.lemma_}')
doc2 = nlp(u"I saw eighteen mice today!")

show_lemmas(doc2)
I            PRON   4690420944186131903    I
saw          VERB   11925638236994514241   see
eighteen     NUM    9609336664675087640    eighteen
mice         NOUN   1384165645700560590    mouse
today        NOUN   11042482332948150395   today
!            PUNCT  17494803046312582752   !
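
Note that `saw` maps to `see` and `mice` to `mouse` - irregular forms resolved through the vocabulary rather than by stripping suffixes. In practice the string lemmas are what gets passed downstream; for example, a lemmatized version of the sentence is just a join over `token.lemma_` (a quick sketch):

# Rebuild doc2 as a lemmatized string - a common normalization step
# before counting or vectorizing text.
print(' '.join(token.lemma_ for token in doc2))
# should print: I see eighteen mouse today !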
doc3 = nlp(u"I am meeting him tomorrow at the meeting.")

show_lemmas(doc3)
I            PRON   4690420944186131903    I
am           AUX    10382539506755952630   be
meeting      VERB   6880656908171229526    meet
him          PRON   1655312771067108281    he
tomorrow     NOUN   3573583789758258062    tomorrow
at           ADP    11667289587015813222   at
the          DET    7425985699627899538    the
meeting      NOUN   14798207169164081740   meeting
.            PUNCT  12646065887601541794   .

Here we see that `meeting` is tagged as a verb in its first occurrence (lemma `meet`) and as a noun in its second (lemma `meeting`), so the same surface form receives a different lemma depending on its part of speech.
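
A quick filter over `doc3` makes this explicit (a minimal sketch):

# Show only the two 'meeting' tokens: the verb lemmatizes to 'meet',
# while the noun keeps 'meeting'.
for token in doc3:
    if token.text.lower() == 'meeting':
        print(f'{token.text:{12}} {token.pos_:{6}} {token.lemma_}')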

doc4 = nlp(u"That's an enormous automobile")

show_lemmas(doc4)
That         PRON   4380130941430378203    that
's           AUX    10382539506755952630   be
an           DET    15099054000809333061   an
enormous     ADJ    17917224542039855524   enormous
automobile   NOUN   7211811266693931283    automobile

Note that lemmatization does *not* reduce words to their most basic synonym - that is, `enormous` doesn't become `big` and `automobile` doesn't become `car`.
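
For contrast with the stemming approach mentioned at the start, here is a sketch (assuming NLTK is available) of what a Porter stemmer does with some of the same words - suffix stripping only, with no vocabulary lookup and no part-of-speech context:

from nltk.stem.porter import PorterStemmer

p_stemmer = PorterStemmer()
for word in ['enormous', 'automobile', 'mice', 'meeting']:
    # 'meeting' loses its '-ing', 'mice' is left alone, and 'enormous' and
    # 'automobile' come out as truncated, non-dictionary forms
    print(f'{word:{12}} --> {p_stemmer.stem(word)}')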