Would it be safe to provide a very crude summary of the differences in use, for those who are not familiar?
The semantic web standards become very important when you start dealing with a domain that has a lot of data, and a lot of data of different types, esp. when those types are something other than integers, floating point/real values, simple enumerations/categories, Booleans, or non-semantic string/text (i.e. text you might need to index or search for patterns, but don't need to actually understand in order to process it).
Domains where this has proven to be critically useful include many of the natural sciences (esp. biology, geosciences, climate modeling/meteorology), intelligence services, military operations (e.g. the US Dept of Defense, except for the healthcare part, was an early adopter of RDF--the healthcare part has had to contend with other healthcare-specific standards that have finally caught up), library services, natural language processing, etc.
OWL is used mostly to express an ontology: a formal representation of concepts in a domain, their properties, and their relationships to other concepts. It is how you define things and behavior in a computable fashion. OWL ontologies are published as RDF graphs (and other iso-semantic representations are possible).
RDF is a graph expression language. It is made up of Subject-Predicate-Object triples. RDF, with or without OWL, is often used to express structured data beyond the simple rows x columns that something like a CSV file would hold. It is often used in the same way that an XML-based grammar or some specific use of JSON would be, but it uses semantic web standards, so that unlike XML or JSON, you can understand RDF without having to know the application it came from (i.e. it is self-describing in a way that XML grammars or JSON uses are not).
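To make the triple model concrete, here is a minimal sketch using plain Python tuples. The predicate URIs are from the real FOAF and RDF vocabularies; the subject URIs (example.org) are made up for illustration:

```python
# RDF data is just a set of (subject, predicate, object) triples.
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
FOAF = "http://xmlns.com/foaf/0.1/"

triples = [
    ("http://example.org/people/alice", RDF_TYPE, FOAF + "Person"),
    ("http://example.org/people/alice", FOAF + "name", "Alice"),
    ("http://example.org/people/alice", FOAF + "knows",
     "http://example.org/people/bob"),
]

# Because predicates are globally unique URIs, any consumer can look up
# what foaf:name means without knowing the producing application --
# this is the "self-describing" property mentioned above.
names = [o for (s, p, o) in triples if p == FOAF + "name"]
print(names)  # ['Alice']
```

A real application would use an RDF library rather than bare tuples, but the data model is exactly this.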
RDF-Schema is a way to constrain RDF so that you can have the same sort of validation and model consistency that you would get from an XML schema language (e.g. XML Schema/XSD, DTDs, RELAX-NG, or Schematron). Schematron approaches, but does not quite reach, the level of constraint you can express with OWL, and you have to do a bit of hacking to handle controlled vocabularies in Schematron that are a chip shot in OWL: figuring out whether "Lobster" fits a Schematron definition of "Seafood" likely means iterating through an XML fragment that enumerates all seafood types, while in OWL you just ask whether Lobster isA Seafood.
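The "Lobster isA Seafood" question boils down to walking an rdfs:subClassOf hierarchy transitively. A toy sketch (the class names and hierarchy are invented for illustration; a real system would run an OWL/RDFS reasoner over URIs, not bare strings):

```python
# Each class maps to its direct superclass (a miniature rdfs:subClassOf).
SUBCLASS_OF = {
    "Lobster": "Shellfish",
    "Shellfish": "Seafood",
    "Salmon": "Fish",
    "Fish": "Seafood",
    "Seafood": "Food",
}

def is_a(cls, ancestor):
    """True if cls equals ancestor or is a (transitive) subclass of it."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = SUBCLASS_OF.get(cls)  # climb one level; None at the top
    return False

print(is_a("Lobster", "Seafood"))  # True  (Lobster -> Shellfish -> Seafood)
print(is_a("Lobster", "Fish"))     # False
```

The Schematron equivalent would be enumerating every seafood type in an XML fragment and checking membership by hand; here the hierarchy itself answers the question.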
It is a little confusing, since there is an XML grammar for RDF (and other, non-XML ways to represent that same sort of Subject-Predicate-Object triple, e.g. Turtle or N-Triples).
JSON-LD is a standard for using JSON in pretty much the same way you would use RDF (i.e. to represent interoperable, linked, structured data). You can also use other means (e.g. a JSON Schema) to say some of the same things you might with RDF.
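A minimal JSON-LD sketch, using only the stdlib json module: the @context maps ordinary JSON keys onto globally unique URIs (here the real schema.org vocabulary), which is what lets a JSON-LD processor treat the document as RDF triples. The person and example.org identifiers are invented:

```python
import json

doc = {
    "@context": {
        "name": "http://schema.org/name",
        "knows": {"@id": "http://schema.org/knows", "@type": "@id"},
    },
    "@id": "http://example.org/people/alice",
    "name": "Alice",
    "knows": "http://example.org/people/bob",
}

# An ordinary JSON consumer just sees the key "name"; a JSON-LD
# processor can expand it via the @context to the full URI, making the
# document self-describing in the same way RDF is.
expanded_key = doc["@context"]["name"]
print(expanded_key)  # http://schema.org/name
print(json.dumps(doc)[:12])
```

Without the @context it is just application-specific JSON; with it, the same bytes carry interoperable semantics.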
For example, the HL7 FHIR standard for the exchange of health information has an underlying reference model (currently in UML2, but if you wanted to do it with OWL and RDF-Schema you could--in fact I am sure several people have), and uses a lot of controlled vocabularies (all of which can be represented in OWL--not that they are all published that way, but again, if you look, I am sure you can find someone who has done it).
The web services for FHIR can be implemented with an HL7-specified JSON Schema or XML Schema, and with RDF. In all of these cases there is a lot of validation and conformance checking required to address the inherent complexity of the data, since it has to represent everything from: someone's current, temporary address (good until next week, when they return to their regular home of record, before going to their vacation home two weeks from now); their DNA sequences, with interpretation of what the various variants mean (what gene they are in, what the change in function might be, what their risk of disease is); all the signs, symptoms, and history of every illness they have had, might have, or currently have; details of how their next surgery will be performed (and how that relates to findings on different diagnostic tests/images); their complete employment history (including details of occupational exposures and the risks those might entail); what sort of sports their dog enjoys; what they had on their pizza (each one, for the last few years and the rest of their life--since they have an allergy to some food, which you need to track and evaluate for risk of recurrence, and/or you just need a detailed dietary history); their response to counseling for marital problems; etc., etc.
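The "temporary address, good until next week" case above is a good illustration of why context travels with the data. A minimal sketch of what a FHIR Patient resource looks like in its JSON serialization (the field names follow the FHIR Patient resource; the values are invented, and a real resource carries far more detail):

```python
import json

patient = {
    "resourceType": "Patient",
    "id": "example",
    "name": [{"family": "Smith", "given": ["Pat"]}],
    "address": [
        {
            # A temporary address, only valid until the period ends --
            # the validity window is part of the data, not the app.
            "use": "temp",
            "line": ["123 Anywhere St"],
            "city": "Springfield",
            "period": {"end": "2024-07-01"},
        }
    ],
}

# Round-trips as plain JSON, but every field has a defined meaning in
# the FHIR reference model, so conformance checking is possible.
serialized = json.dumps(patient)
print(patient["address"][0]["use"])  # temp
```

Validating that a "temp" address is not used after its period.end is exactly the kind of conformance checking the schemas (and RDF representations) make possible.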
You wouldn't want to have to try to come up with your own enumeration for vocabulary in healthcare either (seafood is just one type of food, and foods are just a small part of the total vocabulary needs: ~30K different diseases, a similar number of drugs, maybe 100-fold as many symptoms, plus all the various body parts, genes, proteins, etc.), so having the formal semantics, and the combination of very specific and very flexible information models, is needed to have computable clinical data.
(We are still not 100% there yet, but we are closer today than when the first electronic health record systems went live in the late 1960s/early 1970s.)
You get the idea, and can see why such a hyper-structured/hyper-organized approach to computable semantics and generalizable interoperability, external to the applications that create, store, analyze, and report on the data, is so valuable. If you wanted to design a program to deal with health information, you would either have to create a sophisticated model yourself and figure out how to get data into your application, or be able to use computable models to import, transform, and/or analyze data (which is not trivial, but a lot easier than trying to build all this domain knowledge into every little application, esp. since it's hard to say how the data will be needed in the future just based on past uses or patterns of use).
The biggest strength of using OWL, RDF-Schema, and RDF (or the equivalent using JSON-LD, or even iso-semantic XML that uses a standard grammar) is that you can separate the meaning of data from the applications that produce and consume it. Capturing that meaning also requires a detailed record of the context in which the data occurred (often including what test method was used to measure something, what standard it was compared to, what the patient was doing at the time, what medications/diseases/procedures were in play, etc.).
For example, it would seem easy to write a program that figured out if someone had high blood pressure--but only if you know that their blood pressure was measured with the correct BP cuff size for their arm, in the middle of the arm, with the arm supported and the cuff at approximately the same height as their heart, with them sitting upright, feet on the floor, legs not crossed, while they are relaxed and comfortable, and not in acute pain, anxious, or acutely ill. You actually need a set of these measurements, and you need to know the pressure at two different phases of the heart's contraction (basically the high and low values of the pressure wave--or you can measure the pressure 150-300 times a second and use that to calculate the mean pressure), and of course, units of measurement are really important. So, if all of those are true, then that specific BP measurement can be used as part of the data set to see if they meet the diagnostic criteria (and you do need to be specific about whose criteria you are using, and which edition).
If the patient is lying flat, or standing, or angry, or has just donated blood, or is in any of a few hundred other current or recent conditions, then you cannot use those values for that application.
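The kind of context check described above can be sketched as a simple predicate over a measurement plus its recorded context. The field names, the disqualifying-condition list, and the logic here are invented for illustration and are not from any real clinical guideline or API (though "mm[Hg]" is the real UCUM unit code for blood pressure):

```python
# Conditions that invalidate a reading for this particular use.
DISQUALIFYING_CONDITIONS = {
    "lying flat", "standing", "angry", "recent blood donation",
    "acute pain", "anxiety", "acutely ill",
}

def usable_for_hypertension_criteria(reading):
    """True if this reading's context allows it to count toward diagnosis."""
    return (
        reading["unit"] == "mm[Hg]"                     # units matter
        and reading["position"] == "sitting"            # upright, feet on floor
        and reading["cuff_size_correct"]                # right cuff for the arm
        and not (set(reading["conditions"]) & DISQUALIFYING_CONDITIONS)
    )

reading = {
    "systolic": 142, "diastolic": 91, "unit": "mm[Hg]",
    "position": "sitting", "cuff_size_correct": True,
    "conditions": [],
}
print(usable_for_hypertension_criteria(reading))  # True

# The same numbers right after a blood donation do not count:
reading["conditions"] = ["recent blood donation"]
print(usable_for_hypertension_criteria(reading))  # False
```

The point is that the acceptability logic runs over context that traveled with the measurement; without that recorded context, the numbers alone cannot safely be used.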
The time and effort to learn OWL and RDF are a lot less than having to try to design even one new disease-specific application from scratch.