What would be a good way to measure the size of a JSP project?
Given an existing JSP project, I would like to get a feel for the complexity/size of the "view" portion of the project. Here's what I've done so far:

  • Pulled the list of JSPs that have been compiled on the production server within the last x months (that eliminates 'dead' JSPs).
  • Wrote a quick scanner to find the JSP fragment files that are imported into the compiled pages.
  • Pulled the size of the file and the timestamp off the file system.
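For what it's worth, the fragment-scanning step above can be sketched with a couple of regular expressions over the raw JSP source. All names here are illustrative, not the author's actual scanner, and the patterns cover only the two common include forms:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical sketch of a fragment scanner: pulls the targets of
// static and dynamic includes out of raw JSP source.
public class FragmentScanner {
    // <%@ include file="header.jspf" %>   (static, translate-time include)
    private static final Pattern STATIC_INCLUDE =
        Pattern.compile("<%@\\s*include\\s+file\\s*=\\s*\"([^\"]+)\"");
    // <jsp:include page="nav.jsp"/>       (dynamic, request-time include)
    private static final Pattern DYNAMIC_INCLUDE =
        Pattern.compile("<jsp:include\\s+page\\s*=\\s*\"([^\"]+)\"");

    public static List<String> findIncludes(String jspSource) {
        List<String> targets = new ArrayList<>();
        for (Pattern p : List.of(STATIC_INCLUDE, DYNAMIC_INCLUDE)) {
            Matcher m = p.matcher(jspSource);
            while (m.find()) {
                targets.add(m.group(1));
            }
        }
        return targets;
    }
}
```

Resolving the matched paths against the file system then gives the size and last-modified timestamp mentioned in the last step.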

So now I have a list of pages and the fragments imported into those pages, along with their size, and the last time they were compiled and changed.

But what I really need to know is how complicated the page is; some of these pages have a whole lot of Java code on them. It's a big project, so to go through each page and size it would be tedious and probably not that accurate.

I was going to write another scanner that measured the code between <% and %>, but I wondered if there was some kind of metrics generator out there that could already do that. I would like it to output how "big" the page was and how "big" the code on the page was. The point is to segregate the small, medium, big, and huge pages, so the absolute measurement is less important than the relative.

EDIT: Wrote another scanner to count the number of JavaScript lines, Java (scriptlet) lines, HTML lines, and instances of taglib usage. Using the results of the scanner, I have some parameters that indicate 'complexity'. Not really clean, but it's OK for now.
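A scanner in the spirit of the one described in the edit might look like the sketch below. The bucketing is deliberately rough: each line is attributed to a single bucket based on the state it starts in, so a line mixing markup and scriptlet code lands in one bucket only. Class and bucket names are illustrative, not the author's tool:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Rough per-line classifier for JSP source: buckets each non-blank line
// as taglib directive, scriptlet (Java), script (JavaScript), or markup.
public class JspLineCounter {
    public static Map<String, Integer> count(String jspSource) {
        Map<String, Integer> buckets = new LinkedHashMap<>();
        buckets.put("taglib", 0);
        buckets.put("scriptlet", 0);
        buckets.put("script", 0);
        buckets.put("markup", 0);

        boolean inScriptlet = false;
        boolean inScript = false;
        for (String line : jspSource.split("\r?\n")) {
            String trimmed = line.trim();
            if (trimmed.isEmpty()) continue;
            if (trimmed.contains("<%@ taglib")) {
                buckets.merge("taglib", 1, Integer::sum);
            } else if (inScriptlet || trimmed.startsWith("<%")) {
                buckets.merge("scriptlet", 1, Integer::sum);
            } else if (inScript || trimmed.startsWith("<script")) {
                buckets.merge("script", 1, Integer::sum);
            } else {
                buckets.merge("markup", 1, Integer::sum);
            }
            // Track open/close state for multi-line blocks.
            if (trimmed.contains("<%") && !trimmed.contains("%>")) inScriptlet = true;
            if (trimmed.contains("%>")) inScriptlet = false;
            if (trimmed.contains("<script") && !trimmed.contains("</script>")) inScript = true;
            if (trimmed.contains("</script>")) inScript = false;
        }
        return buckets;
    }
}
```

The resulting counts are exactly the kind of relative "small / medium / big / huge" parameters the question asks for, without any pretense of being a precise metric.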

Boorer answered 11/10, 2011 at 13:54 Comment(1)
I would be quite interested in your "count those scriptlet lines for me" tool. Would you mind sharing it? – Hydrokinetics

So the issue is that you have smatterings of Java code interspersed with the HTML, so no standard metrics tool will work.

Not quite off the shelf, but our Source Code Search Engine might come pretty close. This is a tool for searching large code bases by indexing the source code using language-accurate lexical extraction. The relevance here is that it computes SLOC, comment counts, and Halstead and cyclomatic measures for the files it indexes, so you get metrics if you simply ignore the search feature. The metrics are generated to an XML file (with one "record" per source file), so you can do whatever further processing you want on them. See the metrics discussion on the linked web page.

While we do have a JSP lexer, it hasn't been tested with the Search Engine yet. We've built dozens of lexers so this should be pretty easy for us to do (and we'd be happy to do it). That would produce the answer you want directly.

If you didn't want to go down that path, you could follow through on your simple idea of extracting the code between <% and %>: dump it into files parallel to the original JSP files, hand that code to the Search Engine's (production) Java lexeme extractor, and get your metrics that way. The lexers are very robust in the face of malformed files, so the fact that the extracted Java fragments might not collectively be quite legal won't bother it a bit.
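The "dump the scriptlets" step described above can be sketched as follows. The regex skips directive blocks (<%@ ... %>) and expressions (<%= ... %>) and keeps only plain scriptlet bodies; whether to keep the other forms is a judgment call, and all names here are hypothetical:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative sketch: pull every plain <% ... %> scriptlet body out of
// a JSP page and concatenate them, ready to be written to a parallel
// file and fed to a Java lexeme extractor.
public class ScriptletDumper {
    // Negative lookahead skips <%@ directives and <%= expressions;
    // DOTALL lets a scriptlet span multiple lines.
    private static final Pattern SCRIPTLET =
        Pattern.compile("<%(?![@=])(.*?)%>", Pattern.DOTALL);

    public static String extractJava(String jspSource) {
        StringBuilder out = new StringBuilder();
        Matcher m = SCRIPTLET.matcher(jspSource);
        while (m.find()) {
            out.append(m.group(1).trim()).append('\n');
        }
        return out.toString();
    }
}
```

Writing each result next to its source page (e.g. foo.jsp alongside a generated foo.jsp.java) keeps the mapping between pages and their extracted code obvious.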

Meagher answered 12/10, 2011 at 3:38 Comment(2)
Source Code Search Engine looks pretty cool! I might like to employ that on the Java classes some day. What I need immediately is pretty informal, and I've got to get something done quickly, so unless I can find a tool that I can point at a list of JSP files and 'just run', I'll probably fall back to writing the quick and dirty scanner. – Boorer
You could download it and try the Ad Hoc lexeme extractor, which is designed to process "generic programming languages" (and yet still produces metrics!). That would get you an answer really quickly. – Meagher
