It looks like you need to narrow things down beyond keywords/key phrases and identify the subject and object of each sentence.
For subject/object recognition, I recommend the Stanford Parser or the Google Cloud Natural Language API, where you send a string and get a dependency tree back in the response.
You can test the Google API first to see if it works well with your corpus: https://cloud.google.com/natural-language/
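If you want to try it programmatically, here is a minimal sketch using the google-cloud-language Python client. It assumes the package is installed, application credentials are configured, and a v1+ client version; the exact request shape can vary between client releases, so check the docs for your version:

    from google.cloud import language_v1

    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content="The team deployed the new service yesterday.",
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )

    # analyze_syntax returns tokens annotated with part of speech
    # and dependency edges (head token index + relation label)
    response = client.analyze_syntax(request={"document": document})
    for token in response.tokens:
        edge = token.dependency_edge
        print(token.text.content, edge.label, edge.head_token_index)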
The outcome here is a subject-predicate-object (SPO) triplet, where the predicate describes the relationship between the subject and the object. You'll need to traverse the dependency graph and write a script to pull the triplet out.
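To make that concrete, here is a minimal sketch of triplet extraction with spaCy (one of the packages mentioned below). It assumes spaCy v3 with the en_core_web_sm model installed and only handles the simple nsubj/dobj case, so treat it as a starting point rather than production code:

    import spacy

    # Assumes the small English model is installed:
    #   python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    def extract_spo(sentence):
        """Very naive SPO extraction: first nominal subject, its
        head verb as the predicate, and that verb's direct object."""
        doc = nlp(sentence)
        for token in doc:
            if token.dep_ == "nsubj" and token.head.pos_ == "VERB":
                subj, pred = token, token.head
                for child in pred.children:
                    if child.dep_ == "dobj":
                        return (subj.text, pred.lemma_, child.text)
        return None  # no simple SPO pattern found

    print(extract_spo("The startup released a new framework."))
    # ('startup', 'release', 'framework')

Real sentences will need more cases (passive voice, clausal complements, conjunctions), but the traversal pattern stays the same: find the subject, walk to its head verb, then walk down to the object.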
Other Packages:
I use NLTK, spaCy, and TextBlob frequently. If the corpus is simple, generic, and straightforward, spaCy and TextBlob work well out of the box. If the corpus is highly customized, domain-specific, or messy (misspellings, bad grammar, etc.), I'll use NLTK and spend more time customizing my NLP text-processing pipeline with scrubbing, lemmatizing, and so on. If you go with one of these packages, you may also want to add your own custom dictionary of technology-related keywords and key phrases so the parser can catch them; see the PhraseMatcher sketch after the links below.
NLTK Tutorial: http://www.nltk.org/book/
Spacy Quickstart: https://spacy.io/usage/
Textblob Quickstart: http://textblob.readthedocs.io/en/dev/quickstart.html
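On the custom-dictionary point, here is a minimal sketch using spaCy's PhraseMatcher (assumes spaCy v3; the terms are hypothetical placeholders for your own technology vocabulary):

    import spacy
    from spacy.matcher import PhraseMatcher

    nlp = spacy.load("en_core_web_sm")

    # Hypothetical domain terms -- swap in your own technology dictionary
    tech_terms = ["machine learning", "Kubernetes", "REST API"]

    # Match on lowercased text so casing differences don't matter
    matcher = PhraseMatcher(nlp.vocab, attr="LOWER")
    matcher.add("TECH", [nlp.make_doc(term) for term in tech_terms])

    doc = nlp("We trained a machine learning model and deployed it on Kubernetes.")
    for match_id, start, end in matcher(doc):
        print(doc[start:end].text)
    # machine learning
    # Kubernetes

This catches multi-word domain terms that a generic tokenizer/parser would otherwise split up or miss.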