[04:58:24 CDT(-0500)] <esauro> hi guys
[04:58:30 CDT(-0500)] <esauro> this is my first time here
[04:58:50 CDT(-0500)] <esauro> is this a channel for developers, for deployers, or both?
[07:34:10 CDT(-0500)] <foxnesn> hi
[07:34:14 CDT(-0500)] <foxnesn> a channel for anyone
[07:34:22 CDT(-0500)] <foxnesn> i am here for help usually lol
[07:53:39 CDT(-0500)] <foxnesn> uh, did something happen to the repos overnight?
[08:44:59 CDT(-0500)] <foxnesn> maven repos down?
[08:45:26 CDT(-0500)] <atilling> Fine for me
[08:45:45 CDT(-0500)] <foxnesn> [INFO] Scanning for projects...
[08:45:45 CDT(-0500)] <foxnesn> [INFO]
[08:45:45 CDT(-0500)] <foxnesn> [INFO] ------------------------------------------------------------------------
[08:45:48 CDT(-0500)] <foxnesn> [INFO] Building local-cas 1.0-SNAPSHOT
[08:45:51 CDT(-0500)] <foxnesn> [INFO] ------------------------------------------------------------------------
[08:45:54 CDT(-0500)] <foxnesn> Downloading: http://oss.sonatype.org/content/repositories/releases/org/opensaml/opensaml/1.1b/opensaml-1.1b.pom
[08:45:57 CDT(-0500)] <foxnesn> [INFO] ------------------------------------------------------------------------
[08:46:00 CDT(-0500)] <foxnesn> [INFO] BUILD FAILURE
[08:46:02 CDT(-0500)] <foxnesn> [INFO] ------------------------------------------------------------------------
[08:46:05 CDT(-0500)] <foxnesn> [INFO] Total time: 23.584s
[08:46:08 CDT(-0500)] <foxnesn> worked fine wednesday
[08:46:37 CDT(-0500)] <foxnesn> cant even hit http://oss.sonatype.org/content/repositories/releases/ manually
[08:47:44 CDT(-0500)] <atilling> Ahh yeah, sorry, for some reason I was thinking you were asking about the github source repo
[08:47:58 CDT(-0500)] <atilling> does seem that sonatype.org is down
[08:48:22 CDT(-0500)] <foxnesn> fail
[08:48:36 CDT(-0500)] <foxnesn> guess i will have to wait
[09:40:47 CDT(-0500)] <foxnesn> still down...
[10:21:32 CDT(-0500)] <RaviJK> serac hi
[10:21:36 CDT(-0500)] <serac> hi
[10:21:51 CDT(-0500)] <RaviJK> i have a requirement wanted to ask for your opinion
[10:22:07 CDT(-0500)] <RaviJK> we have cas running at yourdomain.com/cas
[10:22:15 CDT(-0500)] <RaviJK> so the cookies have path set to /cas by default
[10:22:36 CDT(-0500)] <RaviJK> i changed the path from /cas to / in ticketGrantingTicketCookieGenerator.xml
[10:22:54 CDT(-0500)] <RaviJK> however the cookies being generated still have /cas as cookiepath
[10:22:58 CDT(-0500)] <RaviJK> is that by design ?
[10:23:09 CDT(-0500)] <RaviJK> noelsharpe ping
[10:23:42 CDT(-0500)] <serac> I'd expect that change to make the cookie path /.
[10:23:54 CDT(-0500)] <serac> Never tested it though.
[10:24:00 CDT(-0500)] <RaviJK> i can't seem to get that working..
[10:24:41 CDT(-0500)] <serac> The only thing I can think of is a deployment problem where the file you changed didn't get applied at deployment time.
[10:25:29 CDT(-0500)] <RaviJK> hmm, when i change the cookie name from CASTGC to something else, it does pick up the new name
[10:26:21 CDT(-0500)] <RaviJK> but the cookie path always is set to /cas by default .. was wondering if this is because the context of application is under /cas within the domain
[10:26:53 CDT(-0500)] <serac> That's correct. See http://jasig.275507.n4.nabble.com/Change-Cookie-Path-and-Domain-for-CAS-Tomcat-td3077573.html/
[10:27:13 CDT(-0500)] <serac> That thread's recent enough to confirm what you're doing ought to work.
[10:27:28 CDT(-0500)] <RaviJK> not found
[10:27:39 CDT(-0500)] <serac> Remove / – typo
[10:28:35 CDT(-0500)] <foxnesn> is mvn completely down?
[10:31:41 CDT(-0500)] <RaviJK> serac, i had seen that link, is there any other place (that you are aware of) within the system that controls cookiepath
[10:31:51 CDT(-0500)] <serac> not afaik
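For reference, the change RaviJK describes would look roughly like this in ticketGrantingTicketCookieGenerator.xml (a sketch based on the stock CAS 3.4 overlay; attribute names assumed from CookieRetrievingCookieGenerator):

    <bean id="ticketGrantingTicketCookieGenerator"
          class="org.jasig.cas.web.support.CookieRetrievingCookieGenerator"
          p:cookieSecure="true"
          p:cookieMaxAge="-1"
          p:cookieName="CASTGC"
          p:cookiePath="/" />

If the deployed WAR still carries the old file, the cookie keeps the /cas path, which fits serac's deployment-problem theory.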
[10:32:07 CDT(-0500)] <serac> foxnesn: jasig maven or maven central?
[10:32:15 CDT(-0500)] <foxnesn> maven central i believe
[10:32:24 CDT(-0500)] <foxnesn> mvn clean package fails to build, cant contact server
[10:32:42 CDT(-0500)] <serac> I can get to the search interface, http://search.maven.org/#search|ga|1|cas-server.
[10:33:21 CDT(-0500)] <foxnesn> same
[10:33:33 CDT(-0500)] <foxnesn> but build fails when it attempts to download dependencies
[10:33:42 CDT(-0500)] <foxnesn> [tomcat@dknauth1dev local-cas]$ mvn clean package
[10:33:42 CDT(-0500)] <foxnesn> [INFO] Scanning for projects...
[10:33:42 CDT(-0500)] <foxnesn> [INFO]
[10:33:42 CDT(-0500)] <foxnesn> [INFO] ------------------------------------------------------------------------
[10:33:45 CDT(-0500)] <foxnesn> [INFO] Building local-cas 1.0-SNAPSHOT
[10:33:48 CDT(-0500)] <foxnesn> [INFO] ------------------------------------------------------------------------
[10:33:51 CDT(-0500)] <foxnesn> Downloading: http://oss.sonatype.org/content/repositories/releases/org/opensaml/opensaml/1.1b/opensaml-1.1b.pom
[10:33:54 CDT(-0500)] <foxnesn> [INFO] ------------------------------------------------------------------------
[10:33:57 CDT(-0500)] <foxnesn> [INFO] BUILD FAILURE
[10:33:59 CDT(-0500)] <foxnesn> [INFO] ------------------------------------------------------------------------
[10:34:02 CDT(-0500)] <foxnesn> [INFO] Total time: 24.789s
[10:34:05 CDT(-0500)] <foxnesn> [ERROR] Failed to execute goal on project local-cas: Could not resolve dependencies
[10:34:52 CDT(-0500)] <foxnesn> same pom.xml worked fine on wednesday
[10:36:55 CDT(-0500)] <foxnesn> im wondering if anyone could verify this for me
[10:37:03 CDT(-0500)] <foxnesn> so i dont think im crazy
[10:39:27 CDT(-0500)] <serac> Can't resolve that URL from here. oss.sonatype.org looks firewalled.
[10:41:01 CDT(-0500)] <serac> Do you have that artifact installed locally?
[10:41:08 CDT(-0500)] <serac> (would imagine yes)
[10:41:48 CDT(-0500)] <foxnesn> well everything was built on wednesday
[10:41:52 CDT(-0500)] <foxnesn> that is in the pom now
[10:41:56 CDT(-0500)] <foxnesn> so i guess yes?
[10:42:23 CDT(-0500)] <serac> Check for it under ~/.m2/repository.
[10:44:27 CDT(-0500)] <foxnesn> hrm not built locally
[10:44:44 CDT(-0500)] <foxnesn> but my pom dependencies are only webapp, support-ldap and clearpass
[10:47:12 CDT(-0500)] <serac> So you don't have the artifact locally?
[10:47:58 CDT(-0500)] <foxnesn> no :/
[10:48:20 CDT(-0500)] <foxnesn> i guess i will have to wait
[10:48:27 CDT(-0500)] <foxnesn> or download it manually
[10:48:42 CDT(-0500)] <serac> Just verified it still attempts to get the artifact even when it's available locally.
[10:49:11 CDT(-0500)] <serac> My overlay isn't building.
[10:49:49 CDT(-0500)] <foxnesn> ok so im not crazy, good
[10:50:13 CDT(-0500)] <serac> Here's what you do:
[10:50:22 CDT(-0500)] <serac> 1. Get the artifact from somewhere else.
[10:50:28 CDT(-0500)] <serac> 2. Build with mvn -o option.
[10:50:32 CDT(-0500)] <serac> -o == offline
[10:50:54 CDT(-0500)] <serac> Ping me privately if you want it from me.
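A sketch of that recipe, assuming the missing artifact is the opensaml-1.1b jar whose download fails above (coordinates read off the failing URL):

    # 1. install the jar obtained elsewhere into ~/.m2/repository
    mvn install:install-file -Dfile=opensaml-1.1b.jar \
        -DgroupId=org.opensaml -DartifactId=opensaml \
        -Dversion=1.1b -Dpackaging=jar -DgeneratePom=true
    # 2. build offline so maven never contacts the dead repository
    mvn -o clean package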
[11:33:16 CDT(-0500)] <apetro_> (I have a schedule conflict for 2pm Eastern today — will be on phone helping with a cas-server installation, actually.)
[11:55:10 CDT(-0500)] <foxnesn> wow ok, maven repos still down so im gonna have to get them manually
[12:25:45 CDT(-0500)] <foxnesn> yay back up
[12:42:29 CDT(-0500)] <apetro_> foxnesn, yes, https://oss.sonatype.org/ had an outage but should be operating normally now.
[12:42:43 CDT(-0500)] <apetro_> some kind of SAN hardware event, apparently.
[12:43:11 CDT(-0500)] <serac> apetro: just wanted to mention I'm fully open to considering the service manager authorization change.
[12:44:35 CDT(-0500)] <serac> I just want to be able to easily review changes on CASImpl, but the diffs were much bigger than needed due to the whitespace issue, so I gave up.
[12:44:41 CDT(-0500)] <apetro_> lovely. I'd prefer to address new and interesting support cases rather than services registry lockouts.
[12:44:53 CDT(-0500)] <serac> Totally agree.
[12:45:01 CDT(-0500)] <serac> I think your reasoning for the change is sound as well.
[12:45:27 CDT(-0500)] <serac> It's part of CAS, so should be authorized by default.
[12:45:28 CDT(-0500)] <apetro_> any hints on how I fix that commit, or do I blow away that branch, create a new one, and commit more carefully?
[12:46:07 CDT(-0500)] <serac> I have no qualms deleting history from my local repo, but I realize that may be a cavalier attitude.
[12:46:52 CDT(-0500)] <apetro_> hmm. I sense a date with Pro Git in my near future.
[12:46:55 CDT(-0500)] <serac> I'd do what feels easiest.
[12:46:57 CDT(-0500)] <serac> haha
[12:56:44 CDT(-0500)] <Ozy_work2> afternoon, gentlemen.
[12:56:54 CDT(-0500)] <wgthom> howdy
[13:01:27 CDT(-0500)] <serac> hello
[13:01:40 CDT(-0500)] <apetro_> checking out: distracted working on a CAS project at this time, sorry.
[13:01:47 CDT(-0500)] <serac> later
[13:01:57 CDT(-0500)] <serac> Agenda items?
[13:02:40 CDT(-0500)] <wgthom> nothing from me…excited about the unconference
[13:03:46 CDT(-0500)] <serac> Anyone have reaction to the cas-user thread about healthcheck functionality, my proposal specifically?
[13:04:53 CDT(-0500)] <wgthom> I'm a lil' behind on the lists…been traveling this week.
[13:04:56 CDT(-0500)] <apetro_> haven't read the healthcheck thread
[13:05:12 CDT(-0500)] <serac> Just wanted to bring to your attention if nothing else.
[13:05:16 CDT(-0500)] <wgthom> could kick it around here though if you're up for it
[13:05:19 CDT(-0500)] <serac> We want/need this functionality.
[13:05:27 CDT(-0500)] <serac> So we're gonna do it and contribute back.
[13:05:34 CDT(-0500)] <wgthom> "We" = VT?
[13:05:38 CDT(-0500)] <serac> Yes.
[13:05:55 CDT(-0500)] <serac> Just want to make something that's useful to others.
[13:06:11 CDT(-0500)] <wgthom> reading thread now...
[13:06:13 CDT(-0500)] <serac> Happy to kick around here.
[13:06:13 CDT(-0500)] <apetro_> I think there's a lot of value in better in-built health monitoring.
[13:06:47 CDT(-0500)] <serac> The specific use case is enterprise monitoring. In our case it's the load balancer health check probes that matter. And those are very dumb.
[13:07:41 CDT(-0500)] <serac> So it has to be simple, i.e. http response codes have to be part of the solution.
[13:08:02 CDT(-0500)] <serac> But I realize there may be more advanced monitoring solutions in place we'd need to consider. And that's where you folks come in
[13:10:33 CDT(-0500)] <wgthom> sounds like a reasonable proposal and something that others will benefit from.
[13:11:52 CDT(-0500)] <serac> There's only one concern I had. The response codes would be overloaded in a way that's not strictly compliant with the HTTP spec.
[13:12:48 CDT(-0500)] <wgthom> being compliant with http would be a good thing I imagine. do your local needs require a solution that breaks http?
[13:13:06 CDT(-0500)] <serac> Well, I thought indicating the number of failed checks in the code would be useful.
[13:13:20 CDT(-0500)] <wgthom> could do that in the message body, no?
[13:13:45 CDT(-0500)] <serac> I'm not sure our LB can examine anything but response code.
[13:14:27 CDT(-0500)] <wgthom> ah
[13:14:49 CDT(-0500)] <wgthom> recent engagements folks are looking for two things.
[13:15:21 CDT(-0500)] <wgthom> healthcheck URL for LB and cas app monitoring via nagios
[13:15:50 CDT(-0500)] <serac> This is more for the former, although it could potentially expose enough data for nagios as well.
[13:16:00 CDT(-0500)] <serac> Our LB admin says we can do content matching.
[13:16:00 CDT(-0500)] <wgthom> not sure how sophisticated the healthcheck url has to be to be useful.
[13:16:57 CDT(-0500)] <serac> I don't think we even know if we'd want to have node failure on slowness, but seems good functionality to build in.
[13:17:00 CDT(-0500)] <wgthom> the other components (db, ldap, os) have been monitored independently.
[13:17:24 CDT(-0500)] <serac> CAS's view of those resources is the only thing that matters.
[13:17:44 CDT(-0500)] <serac> I've seen connections in a pool go belly up while external connections to same resource are fine.
[13:17:59 CDT(-0500)] <wgthom> yep. good point
[13:18:13 CDT(-0500)] <serac> Been meaning to write this up, but not there yet. It's a particular issue with PooledContextSource.
[13:18:43 CDT(-0500)] <serac> Anyway, looks like overloading the http response codes may not be necessary if most folks can do header/content matching.
[13:18:54 CDT(-0500)] <apetro_> the other components can be monitored independently, and should, I agree, but I've worked support cases where they weren't, and my life would have been better had CAS gone ahead and been more self-aware about which of its dependencies was at fault.
[13:18:55 CDT(-0500)] <wgthom> cool
[13:19:26 CDT(-0500)] <apetro_> as in, LDAP becomes glacially slow.
[13:19:36 CDT(-0500)] <serac> Yup.
[13:19:44 CDT(-0500)] <serac> Glacially slow == unavailable to user.
[13:19:50 CDT(-0500)] <serac> == unavailable
[13:20:45 CDT(-0500)] <serac> I've really come to like using custom X- headers (ala REST) for communicating errors in service-to-service use cases, which this really is.
[13:20:47 CDT(-0500)] <wgthom> ah. I do have an agenda item….more and FYI I suppose
[13:21:02 CDT(-0500)] <serac> Any opinions on use of headers vs plain text in body vs xml?
[13:21:24 CDT(-0500)] <serac> headers for tooling, plain text in body for humans?
[13:24:26 CDT(-0500)] <wgthom> I'd try to solve the problem locally as simply as possible first, and then look for more feedback from the community. Not sure I have a firm opinion quite yet regarding your specific questions, though my immediate reaction is to have the error communicated in one place.
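One way to read serac's header/body split is a tiny LB-facing endpoint along these lines. This is a hypothetical sketch only, not the VT proposal; HealthCheckServlet, addCheck, and the Callable-based checks are illustrative names, and a dumb LB probe keys off the status code alone:

    import java.io.IOException;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.concurrent.Callable;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical sketch: run each named check, answer 200 only if all
    // pass (for dumb LB probes), put the failure count in an X- header
    // (for tooling) and per-check detail in the plain-text body (for humans).
    public class HealthCheckServlet extends HttpServlet {

        private final Map<String, Callable<Boolean>> checks =
                new LinkedHashMap<String, Callable<Boolean>>();

        public void addCheck(final String name, final Callable<Boolean> check) {
            checks.put(name, check);
        }

        @Override
        protected void doGet(final HttpServletRequest req, final HttpServletResponse resp)
                throws IOException {
            int failed = 0;
            final StringBuilder body = new StringBuilder();
            for (final Map.Entry<String, Callable<Boolean>> e : checks.entrySet()) {
                boolean ok;
                try {
                    ok = e.getValue().call();
                } catch (final Exception ex) {
                    ok = false;
                }
                if (!ok) {
                    failed++;
                }
                body.append(e.getKey()).append(": ").append(ok ? "OK" : "FAIL").append('\n');
            }
            // 200/503 keeps HTTP semantics intact; the failure count goes in a
            // custom header instead of overloading the status code
            resp.setHeader("X-Health-Failed-Checks", String.valueOf(failed));
            resp.setStatus(failed == 0 ? HttpServletResponse.SC_OK
                                       : HttpServletResponse.SC_SERVICE_UNAVAILABLE);
            resp.setContentType("text/plain");
            resp.getWriter().write(body.toString());
        }
    }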
[13:25:46 CDT(-0500)] <wgthom> we've been working on a multi-data-center active/active architecture for Lamar
[13:26:28 CDT(-0500)] <brandon_> hello
[13:26:29 CDT(-0500)] <wgthom> the solution relies on a little fancy routing at the LTMs based on a cluster identifier appended to the ST IDs
[13:26:39 CDT(-0500)] <wgthom> all is working great.
[13:27:10 CDT(-0500)] <brandon_> is there any way to have example.edu/cas/services automatically redirect to example.edu/cas/services/manage.html?
[13:27:17 CDT(-0500)] <serac> The ticket suffix makes use of oob capability?
[13:27:44 CDT(-0500)] <wgthom> yes.
[13:27:50 CDT(-0500)] <apetro_> brandon_, I thought it did, so long as you don't have a session?
[13:28:09 CDT(-0500)] <wgthom> but then one of the clients decided to use the Saml11AuthenticationFilter
[13:28:21 CDT(-0500)] <serac> And....
[13:28:44 CDT(-0500)] <serac> Ah, those are saml artifact handle identifiers without suffix.
[13:28:46 CDT(-0500)] <wgthom> which sources the SAMLArt (aka ST ID) from a different algorithm,
[13:28:49 CDT(-0500)] <wgthom> yes
[13:29:05 CDT(-0500)] <serac> Don't see a way around that.
[13:29:12 CDT(-0500)] <serac> SAML identifiers are SAML identifiers.
[13:29:19 CDT(-0500)] <brandon_> apetro_: well right now i go to /cas/services and it redirects me to login, i login, and when i try to go to /cas/services it redirects me to /cas/login
[13:29:22 CDT(-0500)] <wgthom> well….fall back to CASAuthNFilter...
[13:29:42 CDT(-0500)] <serac> Don't see how that helps.
[13:30:25 CDT(-0500)] <foxnesn> is it normal for me to have to download commons-pool-1.5.2.jar to get pooling to work? thought it was odd that it wasnt mentioned in the wiki
[13:30:38 CDT(-0500)] <wgthom> CASAuthNFilter will result in a CAS ST rather than the SAMLArt
[13:31:45 CDT(-0500)] <serac> And the client can get a SAML AttributeQuery with a CAS ST ticket?
[13:31:52 CDT(-0500)] <wgthom> right
[13:31:56 CDT(-0500)] <serac> Boo!
[13:32:07 CDT(-0500)] <serac> I mean good for Lamar.
[13:32:13 CDT(-0500)] <serac> But bad for our SAML implementation.
[13:32:28 CDT(-0500)] <wgthom> well, I'd rather have the attr in the normal CAS payload…but that's another discussion.
[13:32:46 CDT(-0500)] <serac> s/discussion/flamewar/
[13:33:09 CDT(-0500)] <serac> This rings a bell tho....
[13:33:22 CDT(-0500)] <serac> There's been discussion on shib-users about stateless clustering.
[13:33:25 CDT(-0500)] <wgthom> anyway…just FYI.
[13:33:45 CDT(-0500)] <serac> Scott Cantor has done some work for it; wonder if it would be applicable to our SAML impl.
[13:34:05 CDT(-0500)] <serac> (I'm pretty sure it's specific to the Shib SP, but might be some ideas in there of use.)
[13:39:39 CDT(-0500)] <apetro_> back
[14:06:33 CDT(-0500)] <foxnesn> do yall use a hardware load balancer for your CAS HA ?
[14:06:38 CDT(-0500)] <serac> yes
[14:06:48 CDT(-0500)] <foxnesn> i may attempt to do it in software
[14:06:59 CDT(-0500)] <serac> How so?
[14:07:14 CDT(-0500)] <foxnesn> doesnt apache have a module somewhere that does it?
[14:07:51 CDT(-0500)] <serac> mod_proxy_balancer is a poor choice for a poor man's load balancer
[14:07:56 CDT(-0500)] <foxnesn> lol
[14:08:15 CDT(-0500)] <atilling> I've done tomcat load balancing with mod_jk
[14:08:17 CDT(-0500)] <foxnesn> im open to suggestions before i tell my boss we need more money
[14:08:18 CDT(-0500)] <serac> Please consider ipvsadm+keepalived.
[14:08:48 CDT(-0500)] <wgthom> are you deploying to a VM or dedicated hardware?
[14:09:01 CDT(-0500)] <serac> That combination is really a linux/software implementation of dedicated LB hardware.
[14:10:12 CDT(-0500)] <wgthom> fox: if you are deploying to a VM, you may have enough availability built in already
[14:10:27 CDT(-0500)] <foxnesn> undecided
[14:10:30 CDT(-0500)] <wgthom> without the need for multi-node cluster
[14:10:38 CDT(-0500)] <foxnesn> we really just want failover
[14:10:46 CDT(-0500)] <foxnesn> but i understand good failover means load balancing
[14:10:50 CDT(-0500)] <wgthom> failover of what?
[14:10:54 CDT(-0500)] <foxnesn> CAS
[14:10:59 CDT(-0500)] <wgthom> of what component?
[14:11:13 CDT(-0500)] <foxnesn> component?
[14:11:18 CDT(-0500)] <foxnesn> sorry confused
[14:11:18 CDT(-0500)] <wgthom> hardware? use a VM cluster if you have one and stick with one node cas.
[14:11:41 CDT(-0500)] <serac> What happens when the one cas node goes down?
[14:11:52 CDT(-0500)] <foxnesn> i was thinking 1 cas per VM
[14:12:05 CDT(-0500)] <foxnesn> then a VM with a load balancer
[14:12:11 CDT(-0500)] <foxnesn> but that seems silly to me
[14:12:30 CDT(-0500)] <serac> What part of that is silly?
[14:12:37 CDT(-0500)] <foxnesn> well it is a VM
[14:12:44 CDT(-0500)] <foxnesn> most likely the entire server is going to fail
[14:12:50 CDT(-0500)] <foxnesn> taking down both nodes and the load balancer
[14:12:51 CDT(-0500)] <serac> Hardly.
[14:13:00 CDT(-0500)] <foxnesn> no?
[14:13:05 CDT(-0500)] <serac> Tomcat process on one VM goes OOM.
[14:13:20 CDT(-0500)] <serac> VM host is fine, CAS on that node is dead.
[14:13:23 CDT(-0500)] <serac> OOM happens.
[14:13:35 CDT(-0500)] <serac> But certainly not the only kind of software failure mode.
[14:13:39 CDT(-0500)] <foxnesn> hrm...
[14:13:44 CDT(-0500)] <foxnesn> i see
[14:13:53 CDT(-0500)] <wgthom> sure, but it's a quick reboot. cas software hardly ever wedges.
[14:13:53 CDT(-0500)] <atilling> You only have one host for your VM?
[14:14:02 CDT(-0500)] <serac> A true HA setup has multiple redundant nodes; either active/active or active/passive.
[14:14:14 CDT(-0500)] <serac> I'd argue active/passive is a failover config.
[14:14:20 CDT(-0500)] <serac> active/active is true HA
[14:14:26 CDT(-0500)] <serac> Let the flaming begin
[14:14:31 CDT(-0500)] <foxnesn> prolly go active/passive
[14:14:48 CDT(-0500)] <serac> You're wasting 50% of your "hardware."
[14:15:04 CDT(-0500)] <serac> (but maybe saving on energy; hard to say)
[14:16:02 CDT(-0500)] <serac> wgthom: dunno about the VM environment you're describing
[14:16:09 CDT(-0500)] <atilling> We have our 3 CAS servers as VMs across at minimum 2 hosts, those two hosts are in two different datacenters, in two different buildings
[14:16:12 CDT(-0500)] <serac> We don't control the platform here; only apps running as our user.
[14:17:44 CDT(-0500)] <wgthom> a VM cluster with hardware redundancy built in.
[14:18:17 CDT(-0500)] <foxnesn> would it make sense to set up a jpaticketreg on a database when going failover or HA?
[14:18:35 CDT(-0500)] <foxnesn> and replicate the databases
[14:18:36 CDT(-0500)] <atilling> cluster of 3 VM hosts in one datacenter, cluster of 2 vm hosts in the other. CAS VM's wander between hosts as vsphere load dictates but always 2 cas vm's in one center and 1 in the other
[14:18:44 CDT(-0500)] <wgthom> fox: depends on your requirements for availability and recovery
[14:19:57 CDT(-0500)] <wgthom> if you can satisfy your reqs via a single node, you'd have a much simpler system.
[14:20:04 CDT(-0500)] <foxnesn> i agree
[14:20:26 CDT(-0500)] <foxnesn> but i was told HA is a goal
[14:21:44 CDT(-0500)] <foxnesn> clustering tomcat requires some sort of balancer i assume
[14:22:15 CDT(-0500)] <wgthom> HA without definition doesn't mean much…
[14:22:24 CDT(-0500)] <foxnesn> yea
[14:22:27 CDT(-0500)] <foxnesn> tell me about it
[14:22:38 CDT(-0500)] <foxnesn> im still trying to decrypt the final goal
[14:22:50 CDT(-0500)] <serac> You can do a lot with just a general understanding of that term tho.
[14:22:53 CDT(-0500)] <atilling> Bill is right, a single VM can be as reliable as a cluster as long as your VM hosts are a cluster
[14:23:08 CDT(-0500)] <serac> reliable but not as available
[14:23:27 CDT(-0500)] <atilling> Also depends on if you need HA or HA/HT
[14:23:28 CDT(-0500)] <foxnesn> we want to avoid having users sign in again if a node fails
[14:23:43 CDT(-0500)] <foxnesn> that's really all
[14:24:07 CDT(-0500)] <serac> avoiding reauthentication is a much higher availability goal
[14:24:21 CDT(-0500)] <foxnesn> yes
[14:24:35 CDT(-0500)] <serac> Strictly speaking it can be really hard.
[14:24:38 CDT(-0500)] <wgthom> agree with serac: mostly folks just want to make sure they get to their app. meaning CAS is available.
[14:24:46 CDT(-0500)] <atilling> The VM comment holds there, if your VM hosts are clustered a single node is HA - but not HT
[14:24:55 CDT(-0500)] <serac> For example, if a node fails during the login webflow, like just after posting your auth data, what happens?
[14:32:52 CDT(-0500)] <foxnesn> well
[14:33:02 CDT(-0500)] <foxnesn> i have 2 vm hosts each with one cas instance
[14:33:09 CDT(-0500)] <foxnesn> if 1 fails i want the other to step in
[14:33:34 CDT(-0500)] <foxnesn> i will need some mechanism to control that
[14:33:54 CDT(-0500)] <foxnesn> since our 6-7 cas clients have to be pointed to just 1 cas server at a time
[14:33:54 CDT(-0500)] <serac> That's where ipvsadm+keepalived comes in, keepalived in particular.
[14:34:49 CDT(-0500)] <foxnesn> ill have to set that up over the weekend to look at it
[14:34:50 CDT(-0500)] <serac> The clients point to an IP on the LB, and it transparently forwards connections to a real node.
[14:35:00 CDT(-0500)] <serac> keepalived keeps track of what reals are available
[14:35:07 CDT(-0500)] <foxnesn> yea
[14:35:09 CDT(-0500)] <serac> That's a critical piece of functionality.
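A minimal sketch of the keepalived half of that setup (keepalived.conf; the VIP, real-server addresses, and timings are placeholders):

    virtual_server 192.0.2.10 443 {
        delay_loop 10
        lb_algo rr
        lb_kind DR
        protocol TCP

        real_server 192.0.2.11 443 {
            SSL_GET {
                url {
                    path /cas/login
                    status_code 200
                }
                connect_timeout 3
            }
        }
        real_server 192.0.2.12 443 {
            SSL_GET {
                url {
                    path /cas/login
                    status_code 200
                }
                connect_timeout 3
            }
        }
    }

keepalived runs the health checks and pulls a dead real server out of the ipvs table; the clients never see anything but the VIP.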
[14:35:21 CDT(-0500)] <foxnesn> generally we would leave this to our server guys
[14:35:32 CDT(-0500)] <foxnesn> not sure what im doing with it at this point heh
[14:35:45 CDT(-0500)] <atilling> apache httpd with mod_jk can do that
[14:36:02 CDT(-0500)] <serac> keepalived has a lot more flexible health checking options
[14:36:48 CDT(-0500)] <atilling> I agree - just apache mod_jk is easy to do and is enough for most purposes
[14:37:02 CDT(-0500)] <serac> Good point.
[14:37:17 CDT(-0500)] <atilling> Here we have redundant Cisco ACE 4710's - more than we need
[14:37:21 CDT(-0500)] <serac> foxnesn: if you have Apache skills, Apache options may be good enough.
[14:38:09 CDT(-0500)] <atilling> the other part I like about mod_jk is you're getting all the pieces from a single source
[14:38:14 CDT(-0500)] <foxnesn> keepalived will require kernel recompiling and such
[14:38:37 CDT(-0500)] <atilling> apache httpd, apache tomcat and apache tomcat connector
[14:38:38 CDT(-0500)] <serac> I just installed my distro's packages and I was good.
[14:38:46 CDT(-0500)] <foxnesn> what distro?
[14:38:59 CDT(-0500)] <foxnesn> im on centos 6
[14:39:05 CDT(-0500)] <foxnesn> so redhat
[14:39:10 CDT(-0500)] <serac> iirc ubuntu 10.04
[14:39:13 CDT(-0500)] <serac> server
[14:39:59 CDT(-0500)] <foxnesn> yea i see a debian package
[14:40:11 CDT(-0500)] <foxnesn> i love ubuntu server btw, but apparently we have settled on centos
[14:40:24 CDT(-0500)] <foxnesn> given our familiarity with red hat
[14:40:35 CDT(-0500)] <serac> It's the same deal here b/w Debian.
[14:41:56 CDT(-0500)] <foxnesn> so with mod_jk i will need another VM with apache httpd installed on it with the tomcat connector
[14:42:57 CDT(-0500)] <atilling> that's the way I would handle it if I needed a software solution
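A sketch of that layout with mod_jk's workers.properties (hostnames and the mount point are placeholders):

    worker.list=cas_lb
    worker.cas1.type=ajp13
    worker.cas1.host=cas1.example.edu
    worker.cas1.port=8009
    worker.cas2.type=ajp13
    worker.cas2.host=cas2.example.edu
    worker.cas2.port=8009
    worker.cas_lb.type=lb
    worker.cas_lb.balance_workers=cas1,cas2

plus, in httpd.conf:

    JkMount /cas/* cas_lb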
[14:43:14 CDT(-0500)] <foxnesn> then on the cas nodes i would have to setup tomcat for clustering?
[14:43:44 CDT(-0500)] <atilling> yup and then in the CAS app set up an HA registry
[14:43:59 CDT(-0500)] <foxnesn> jpaticketreg for HA then?
[14:45:09 CDT(-0500)] <atilling> sure - I like ehcache but whatever works best in your enviro
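For the jpaticketreg option, the registry swap itself is roughly this (a sketch against CAS 3.4's ticketRegistry.xml; the entityManagerFactory and transaction-manager wiring it needs is omitted):

    <!-- replaces the default org.jasig.cas.ticket.registry.DefaultTicketRegistry -->
    <bean id="ticketRegistry"
          class="org.jasig.cas.ticket.registry.JpaTicketRegistry" />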
[14:45:22 CDT(-0500)] <foxnesn> easier works best lol
[14:45:39 CDT(-0500)] <foxnesn> at least until i get a handle on it
[14:46:54 CDT(-0500)] <foxnesn> each cas instance will have its own ticket reg database that is replicated
[14:49:19 CDT(-0500)] <serac> What DB platform?
[14:50:15 CDT(-0500)] <foxnesn> mysql
[14:50:36 CDT(-0500)] <serac> DB replication introduces whole new set of issues.
[14:50:45 CDT(-0500)] <foxnesn> im beginning to see that
[14:50:59 CDT(-0500)] <foxnesn> makes more sense to have those separate and have their own load balancer
[14:50:59 CDT(-0500)] <serac> MySQL has multi-master repl support?
[14:51:04 CDT(-0500)] <foxnesn> yea
[14:51:16 CDT(-0500)] <serac> Are you a MySQL guru?
[14:51:22 CDT(-0500)] <foxnesn> im pretty good with it
[14:51:37 CDT(-0500)] <foxnesn> we are an oracle shop tho
[14:51:47 CDT(-0500)] <serac> So you'll be the DBA on this one?
[14:51:57 CDT(-0500)] <foxnesn> God only knows lol
[14:52:00 CDT(-0500)] <serac> haha
[14:52:15 CDT(-0500)] <foxnesn> im assuming i will have some responsibility if it fails
[14:52:28 CDT(-0500)] <foxnesn> to fix it
[14:52:43 CDT(-0500)] <serac> I have no experience with multi-master repl in practice other than to say it's non-trivial.
[14:52:56 CDT(-0500)] <serac> Not trying to scare you, just sanity check.
[14:54:06 CDT(-0500)] <foxnesn> well the purpose of a ticket reg is so that you can go HA
[14:54:23 CDT(-0500)] <foxnesn> i dont see how you can replicate the databases without being HA
[14:54:37 CDT(-0500)] <serac> The kind of repl matters.
[14:54:45 CDT(-0500)] <serac> Active, multi-master repl is hard.
[14:54:51 CDT(-0500)] <serac> We do lazy repl ala PostgreSQL.
[14:55:04 CDT(-0500)] <serac> So it's possible to lose some data on DB node failure.
[14:55:07 CDT(-0500)] <serac> But who cares.
[14:55:16 CDT(-0500)] <serac> This is authenticated state data.
[14:55:24 CDT(-0500)] <serac> At worst you'll have to log in again.
[14:55:28 CDT(-0500)] <foxnesn> agreed
[14:55:39 CDT(-0500)] <serac> Inconvenient but workable.
[14:58:21 CDT(-0500)] <wgthom> fox: Lamar U is doing mysql multi-master replication. i could put you in touch with them
[14:59:29 CDT(-0500)] <foxnesn> cool, are they doing it for CAS or something else?
[14:59:55 CDT(-0500)] <wgthom> yes, cas
[15:00:33 CDT(-0500)] <wgthom> httpd/tomcat/mysql on each node
[15:00:55 CDT(-0500)] <wgthom> ltm in the front
[15:01:30 CDT(-0500)] <serac> I'll just say that if I had to do it over again I'd probably go with 2 nodes of tomcat+memcached and be done with it. It's the lightest and easiest HA setup IMO.
[15:02:05 CDT(-0500)] <serac> memcached is so simple and has really elegant failure modes.
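The memcached variant serac describes would be wired roughly like this (a sketch; the constructor-arg order is assumed from the cas-server-integration-memcached module, and hosts/timeouts are placeholders):

    <bean id="ticketRegistry"
          class="org.jasig.cas.ticket.registry.MemCacheTicketRegistry">
        <!-- memcached nodes -->
        <constructor-arg index="0">
            <list>
                <value>cas1.example.edu:11211</value>
                <value>cas2.example.edu:11211</value>
            </list>
        </constructor-arg>
        <!-- TGT timeout, seconds -->
        <constructor-arg index="1" value="28800" />
        <!-- ST timeout, seconds -->
        <constructor-arg index="2" value="300" />
    </bean>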
[15:02:24 CDT(-0500)] <wgthom> simpler is better!
[15:02:29 CDT(-0500)] <wgthom> lamar had mysql gurus
[15:02:45 CDT(-0500)] <serac> Kind of the same deal here – Oracle gurus initially, then PostgreSQL gurus.
[15:03:40 CDT(-0500)] <foxnesn> memcache is the default on 3.4.10 right?
[15:03:54 CDT(-0500)] <serac> Nope, default is still in-memory.
[15:03:58 CDT(-0500)] <foxnesn> ahh
[15:04:28 CDT(-0500)] <foxnesn> serac, but you still need a load balancer then?
[15:04:39 CDT(-0500)] <foxnesn> just no mysql backend
[15:05:34 CDT(-0500)] <serac> Correct, need LB. But the per-node setup is much simpler and lighter IMO.
[15:05:52 CDT(-0500)] <serac> You might still need a non-HA database for service management.
[15:06:23 CDT(-0500)] <serac> But services in the database are loaded at startup and polled periodically for changes.
[15:06:36 CDT(-0500)] <serac> You can run happily with a DB failure in that scenario.
[15:07:14 CDT(-0500)] <foxnesn> yea
[15:07:46 CDT(-0500)] <foxnesn> couldnt i always hardcode the services into the deployer?
[15:07:56 CDT(-0500)] <foxnesn> so on restart they come back?
[15:08:08 CDT(-0500)] <serac> Yes
[15:08:09 CDT(-0500)] <wgthom> yes, in deployerConfigContext
[15:08:15 CDT(-0500)] <foxnesn> yea
[15:08:28 CDT(-0500)] <foxnesn> for our purposes i dont think we will need a db for just that
[15:08:31 CDT(-0500)] <foxnesn> if it means keeping it lean
[15:08:44 CDT(-0500)] <foxnesn> we only have 7-8 cas clients
[15:08:47 CDT(-0500)] <wgthom> only downside is you can't use the Services Management UI quite yet
[15:08:50 CDT(-0500)] <foxnesn> not tough to manage
[15:08:58 CDT(-0500)] <wgthom> but not a big problem with 8 cas clients
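The hard-coded alternative looks roughly like this in deployerConfigContext.xml (a sketch; ids, names, and URL patterns are placeholders):

    <bean id="serviceRegistryDao"
          class="org.jasig.cas.services.InMemoryServiceRegistryDaoImpl">
        <property name="registeredServices">
            <list>
                <bean class="org.jasig.cas.services.RegisteredServiceImpl">
                    <property name="id" value="1" />
                    <property name="name" value="portal" />
                    <property name="serviceId" value="https://portal.example.edu/**" />
                    <property name="enabled" value="true" />
                    <property name="ssoEnabled" value="true" />
                </bean>
                <!-- one bean per cas client -->
            </list>
        </property>
    </bean>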
[15:09:16 CDT(-0500)] <foxnesn> yea, well once i get this worked out then i get to work on how the themes work
[15:09:26 CDT(-0500)] <foxnesn> so that alumni see a different login page than students
[15:09:28 CDT(-0500)] <foxnesn> bah
[15:09:49 CDT(-0500)] <foxnesn> i dont even know if that can be done
[15:10:45 CDT(-0500)] <serac> Can be done to some extent of "different."
[15:11:34 CDT(-0500)] <foxnesn> also what about sending certain ldap users to a specific service
[15:11:38 CDT(-0500)] <foxnesn> can that be done?
[15:11:46 CDT(-0500)] <foxnesn> like if they are in the alumni domain ou
[15:12:10 CDT(-0500)] <foxnesn> im assuming some sort of javascript
[15:12:46 CDT(-0500)] <serac> On initial authentication it might be possible with fairly involved webflow customizations, but generally no, since LDAP data like the OU is available at login but not at ST generation time.
[15:13:49 CDT(-0500)] <foxnesn> so if you guys use your /cas/login as your gateway and the user has not requested a service yet where do you send them?
[15:14:03 CDT(-0500)] <serac> "Generic" login page.
[15:14:17 CDT(-0500)] <foxnesn> the one that just says "successfully logged in" ?
[15:14:29 CDT(-0500)] <serac> Yeah, though it would be easy to send all users somewhere else if you wanted.
[15:14:32 CDT(-0500)] <foxnesn> i see that in the flow
[15:14:56 CDT(-0500)] <foxnesn> true just create a view with a jsp redirect
[15:15:33 CDT(-0500)] <serac> Or change the webflow to use a redirect view instead of a jsp-based view. Lots of options.
[15:17:05 CDT(-0500)] <foxnesn> you mean the external redirect?
[15:17:12 CDT(-0500)] <serac> yes
[15:17:39 CDT(-0500)] <foxnesn> cool
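The redirect serac suggests would be a one-line change in login-webflow.xml (a sketch assuming CAS 3.4's Spring Web Flow syntax; the target URL is a placeholder):

    <!-- users arriving with no service go somewhere useful instead of
         the generic success page -->
    <end-state id="viewGenericLoginSuccess"
               view="externalRedirect:http://www.example.edu/" />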
[15:21:59 CDT(-0500)] <atilling> we're clustering with ehcache instead of memcache because there was almost no documentation for memcache
[15:22:16 CDT(-0500)] <foxnesn> i see that
[15:22:43 CDT(-0500)] <wgthom> atilling: what are you using for distribution? rmi?
[15:24:12 CDT(-0500)] <atilling> yes
[15:24:37 CDT(-0500)] <wgthom> nice.
[15:24:44 CDT(-0500)] <atilling> When I looked into memcache there was a note that you needed to use repcached and there was zero docs for repcached
[15:24:54 CDT(-0500)] <wgthom> UNE has the same set up and is pretty happy with it
[15:25:33 CDT(-0500)] <foxnesn> ehcache stores tickets where?
[15:25:47 CDT(-0500)] <foxnesn> caches
[15:25:48 CDT(-0500)] <foxnesn> duh
[15:25:50 CDT(-0500)] <foxnesn> but where lol
[15:26:01 CDT(-0500)] <atilling> memory but it exchanges the info between servers
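The RMI-replicated setup atilling describes looks roughly like this in ehcache.xml (a sketch; peer URLs, port, cache name, and timings are placeholders):

    <cacheManagerPeerProviderFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerProviderFactory"
        properties="peerDiscovery=manual,rmiUrls=//cas2.example.edu:41001/ticketsCache" />

    <cacheManagerPeerListenerFactory
        class="net.sf.ehcache.distribution.RMICacheManagerPeerListenerFactory"
        properties="port=41001" />

    <cache name="ticketsCache"
           maxElementsInMemory="10000"
           eternal="false"
           timeToIdleSeconds="7200"
           timeToLiveSeconds="28800">
        <cacheEventListenerFactory
            class="net.sf.ehcache.distribution.RMICacheReplicatorFactory"
            properties="replicateAsynchronously=true" />
    </cache>

Each node holds the tickets in its own memory (per the question above) and pushes changes to its peers over RMI.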