- How do you decide whether to reuse an existing interface or rebuild a new one?
- Scope of the software (campus-wide, departmental, etc.)
- How widely is it used?
- Is it easily usable by the end user?
  - Many clicks / deeply buried content?
- Is there just one interface to access the data, or multiple interfaces (e.g., multiple ERP systems on campus)?
  - Building new can simplify the user experience
- Do you have enough staff to maintain code changes down the road?
- Does the skin/navigation match marketing initiatives?
- Do your users need the full gamut of features, or just a small subset?
- If built well (e.g., around API calls), you can decouple the interface from the data source; it becomes platform-agnostic, allowing changes between software packages, versions, etc. without affecting end users or making them relearn the interface (see the adapter sketch after this list)
- How does the application fit into the big picture? ("no application is an island...")
- Does the provided functionality meet your needs, or do you have additional requirements?
- Can you do a dashboard with deep links into the existing interface rather than rebuilding it entirely?
- Is there support for the existing way of doing things, or is it outdated and unsupported?
- Potential issues with upgrades of the back-end system.
- Data integrity issues?
- Sharing code to simplify things?
- Platform agnostic?
- What about special features provided by the database vendor, etc.?
  - Provide basic functionality for all, but extra features for specific vendors? (see the capability sketch after this list)
  - Risk of ending up with the "Microsoft problem": a million features that different people use, but any one user only needs ten
- How do you get the data from the source?
  - Institution-specific software hooks
  - Parameterize settings that can differ between institutions (e.g., thresholds for date ranges); see the configuration sketch after this list
- Building for the portal adds extra complexity because of the need to know JSR-168, etc. (see the minimal portlet sketch after this list)
- The more you bring into the campus portal, the greater the risk that an outage has a wide impact (if the portal goes down, you can't reach the other systems even if they're still up)
- Single point of failure? Almost inevitable, but aim to keep its footprint as small as possible and make failures as easy to resolve as possible
- "proxy portlets" and "mini apps" - writing like a mini version of the portal for different apps, where the apps sit on the servers themselves, but connected through to the portal itself through a proxy connection, so even if the portal goes down, you can still access that interface from the server itself
- Data validation, transaction verification, etc.
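
Adapter sketch. A minimal sketch of the decoupling point above (building around API calls so the interface is platform-agnostic). All names here are hypothetical and not tied to any particular vendor; the idea is that the portal UI depends only on a small interface, and each back-end system gets its own adapter, so the ERP can be upgraded or swapped without end users seeing a different screen.

```java
import java.util.List;

/**
 * Hypothetical service interface -- illustrative only, not an actual vendor API.
 * The portal UI codes against this interface, never against the ERP directly.
 */
public interface CourseScheduleService {

    /** Returns display-ready section descriptions for one student and term. */
    List<String> sectionsFor(String studentId, String termCode);
}

/**
 * One adapter per back end; a different ERP (or a new version of the same one)
 * simply gets another implementation of the same interface.
 */
class LegacySisCourseScheduleService implements CourseScheduleService {

    @Override
    public List<String> sectionsFor(String studentId, String termCode) {
        // A real adapter would call the ERP's web service or database here.
        return List.of("MATH 101 - Calculus I (MWF 9:00-9:50)");
    }
}
```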
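Capability sketch. One hedged way to read the "basic functionality for all, extra features for specific vendors" item: keep the core interface small and expose vendor-specific extras as optional capabilities that the UI probes for at runtime. The interfaces and names below are made up for illustration.

```java
import java.util.List;

/** Core functionality every back end must provide (hypothetical names). */
public interface DirectorySearch {
    List<String> findByName(String name);
}

/** Optional capability that only some vendors' back ends support. */
interface PhotoLookup {
    byte[] photoFor(String personId);
}

/** The UI enables the extra feature only when the back end supports it. */
class DirectoryView {
    static boolean supportsPhotos(DirectorySearch search) {
        return search instanceof PhotoLookup;
    }
}
```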
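Configuration sketch. A sketch of the "parameterize settings that can differ between institutions" point: campus-specific thresholds come from a config file instead of being hard-coded, so shared code can run unchanged at multiple schools. The file name, property key, and default value are assumptions for illustration only.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

/**
 * Loads campus-specific settings (hypothetical keys) from a properties file
 * on the classpath, so the same code base can be shared between institutions
 * that use different thresholds.
 */
public class InstitutionSettings {

    private final Properties props = new Properties();

    public InstitutionSettings(String resourceName) throws IOException {
        try (InputStream in = getClass().getResourceAsStream(resourceName)) {
            if (in != null) {
                props.load(in);
            }
        }
    }

    /** How many days ahead of term start to begin showing registration info. */
    public int registrationLookaheadDays() {
        return Integer.parseInt(
                props.getProperty("registration.lookahead.days", "30"));
    }
}
```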
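Minimal portlet sketch. To make the JSR-168 complexity point concrete, this is roughly what the smallest possible portlet looks like: even a trivial view has to go through the portlet lifecycle and render API. The class name and markup are placeholders.

```java
import java.io.IOException;
import java.io.PrintWriter;

import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;

/** A near-minimal JSR-168 portlet: one view mode, static markup. */
public class GradesSummaryPortlet extends GenericPortlet {

    @Override
    protected void doView(RenderRequest request, RenderResponse response)
            throws PortletException, IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        // A real portlet would call a decoupled service layer here
        // instead of emitting hard-coded markup.
        out.println("<p>Grades summary would render here.</p>");
    }
}
```

A real deployment also needs a portlet.xml descriptor and a portlet container, which is part of the extra complexity the notes mention.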
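Proxy sketch. Finally, a very rough sketch of the "proxy portlet" idea: the portal-side component just streams the backing application's pages through, while the app keeps running on its own server, so its native URL still works if the portal is down. The backend URL is a placeholder, and a real proxy would also have to handle request headers, POSTs, authentication, and error responses.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Streams GET requests through to an app running on its own server. */
public class MiniAppProxyServlet extends HttpServlet {

    // Placeholder URL for the backing application.
    private static final String BACKEND_BASE = "https://apps.example.edu/grades";

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String path = req.getPathInfo() == null ? "" : req.getPathInfo();
        HttpURLConnection conn =
                (HttpURLConnection) new URL(BACKEND_BASE + path).openConnection();
        resp.setStatus(conn.getResponseCode());
        if (conn.getContentType() != null) {
            resp.setContentType(conn.getContentType());
        }
        try (InputStream in = conn.getInputStream();
             OutputStream out = resp.getOutputStream()) {
            in.transferTo(out); // pass the backing app's response through unchanged
        }
    }
}
```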