Under this menu item you will find other pages all about me:
- How to reach me
- My resume, with my education, skills, and experience
- My certifications
- Links to my recommendations on LinkedIn.com
So, if you're visiting, I'm glad to see you!
And if you're one of those truly brilliant managers who actually knows how to hire a highly driven, competent IT professional who contributes greatly to the success of his team, company, and management, then I'm even more glad to see you.
Here is my resume: ResumeDanDick20151005
Daniel J. Dick
My Current Love: Artificial Intelligence
For decades, my love for Information Technology expressed itself in many ways.
- mathematical and engineering programs
- sticky financial calculations that non-mathematicians were not comfortable doing
- development of business accounting software
- Linux, Unix, and Windows systems administration
- database internals, administration, and performance
- application support, such as PeopleSoft and Epic
- website development and administration
During the last few years, my interest has returned to where it was during my college years:
Mathematics and Artificial Intelligence.
Years ago, my interest involved programming symbolic mathematics programs in Lisp and Prolog.
Now it involves
- Neural Networks,
- Convolutional networks (ConvNets) and the move to capsule networks
- Deep learning and Machine Learning in general,
- Reinforcement learning,
- Meta Learning,
- Recurrent and time-sequence networks (RNNs and LSTMs)
- Optimization of parameters, features, and models
One thing that is not quite related but interests me greatly is quantum computing. However, I am disciplining myself to postpone diving into it, using it as a reward for gaining enough mastery in machine learning to get some Kaggle competitions under my belt first.
Instead of using programs like Maple and Mathematica, I now use MATLAB, Octave, and Jupyter Notebook while taking online classes from Stanford's Andrew Ng and watching videos from other machine learning greats like Lex Fridman, Geoffrey Hinton, Ilya Sutskever, Siraj Raval, and others.
Right now, I am working on developing the skills to master this field and to knock off Kaggle competitions easily. After that, I hope to bring up my skills in quantum computing and develop algorithms that would facilitate machine learning. In preparation, I might invest a little time extending the internals of common deep learning or machine learning packages to GPUs other than NVIDIA's CUDA devices. But I want my main focus to be a little higher up: the practical use of those packages for real-world projects such as self-driving cars, autonomous flight, medical diagnosis, and financial or stock advisement that offers not mere predictions but strategy as well.
For those interested in my project management methods, I like to formalize things only as much as is necessary. I like to keep things simple.
Where projects are larger and complex, I can be flexible. If a project requires an older waterfall approach where everything is defined to the smallest detail ahead of time and test suites are designed for code before code is developed, I can work in that environment. Often older companies and governments prefer that approach.
Where the line between project work and maintenance blurs, an Agile or Kanban approach often works: user stories come in, designs are drafted and tried out, and tasks are defined on the fly, prioritized, and flow from one silo of activity to another until complete.
I was trained by PMI back in the 1990s, when PMI was still in a formative phase and project management was not as comprehensively defined as it is today in the PMBOK. I never tested for the PMP certification, but I did take Brainbench exams in various project management categories and generally scored in the high 90th percentiles.
Detangling and Turn-Around Projects
One skill I have been recognized for is detangling or turn-around consulting.
One pharmacy benefits management company suddenly inherited about $400,000,000 per year in Medicaid business. They had been a commercial company and knew almost nothing about pharmacy encounters files. Now they had to provide those files in different formats for health plans in several states or face millions of dollars in fines and lost blood products revenue.
A few brilliant and skilled PL/SQL developers responded to this need and created some complicated SQL joins to match claims up with records from other tables: tables for physicians, pharmacies, and insurance plans. They did a great job pulling data from the database and formatting it according to the specifications they were given. But these reports required a more comprehensive architecture.
Data Gets Hopelessly Scrambled
Unfortunately, the developers were deluged with competing requests from several health plans. Physician IDs and pharmacy IDs were missing, and claims were being missed entirely. Some data was incorrectly entered and overran the widths of various fields. And while the states continued to return error files, there were no mechanisms built to scan in those error files and produce corrections, nor any way to determine which claims had not been sent. What was needed was better error detection, some helpful and reliable data mining for data that could be properly mined, and a mechanism for sending in claim reversals, waiting for recognition of each reversal, and then sending in the corrected claim.
Data Gets Unscrambled
Since we could not afford to stop sending in the encounters files, I held weekly meetings with leadership in all of Centene’s subsidiary companies to establish priorities and expectations for the following week.
By the time I left the company, our senior VP said our encounters were second only to one other department, and that department had a very large staff and a very small volume.
During that time, I developed parsing programs to scan the files we had previously sent back into the database, to track what we had sent and what errors had been reported back to us. I split the massive, ugly SQL joins into separate cursors and functions and redesigned the encounters program to be more general in nature, separating the queries from the presentation of the formatted data.
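The shape of that query/presentation split can be sketched in a few lines. This is an illustrative Python sketch, not the original PL/SQL; the table, column, and field widths here are assumptions made up for the example:

```python
def fetch_encounters(cursor, plan_id):
    """Query layer: yield encounter rows for one health plan.

    Kept free of any formatting concerns, so the same query can feed
    every state's file layout. Table and column names are illustrative.
    """
    cursor.execute(
        "SELECT claim_id, npi, amount FROM encounters WHERE plan_id = %s",
        (plan_id,),
    )
    yield from cursor

def format_fixed_width(row, widths=(12, 10, 10)):
    """Presentation layer: render one row as a fixed-width record.

    Each field is left-justified and truncated to its column width.
    """
    return "".join(str(field).ljust(w)[:w] for field, w in zip(row, widths))
```

Because each state's format lives only in the presentation layer, adding a new state means writing a new formatter, not another giant join.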
Filling in Missing Data Accurately
Where a claim was missing a physician ID, that was reported, and an attempt was made to determine whether the claim at hand was for a maintenance medication for which the same prescription had been filled before. During the switch from DEA and state Medicaid numbers to NPI numbers, new resident doctors often did not have IDs of their own, so they would submit prescriptions with an institutional DEA number and their own last name. However, the states had begun to require reporting with NPI numbers. So, for a claim with no physician NPI number, I would check the RX number for other claims that did have an NPI number corresponding to a physician with the same last name, and report the match so it could be corrected in the database while the data was sent through to the state.
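That matching heuristic can be sketched roughly as follows. This is a hypothetical Python reconstruction with made-up field names, not the production code:

```python
def recover_physician_npi(claim, claims_by_rx):
    """Try to recover a missing physician NPI for a claim.

    Looks at other claims on the same RX number and accepts an NPI only
    when the prescriber's last name also matches, so a refill of the
    same prescription fills in the gap. Field names are illustrative.
    """
    if claim.get("physician_npi"):
        return claim["physician_npi"]        # nothing to recover
    for other in claims_by_rx.get(claim["rx_number"], []):
        if (other.get("physician_npi")
                and other.get("physician_last_name")
                == claim.get("physician_last_name")):
            return other["physician_npi"]
    return None  # leave the gap and report it rather than guess
```

Requiring both the RX number and the last name to match keeps the recovery conservative; anything ambiguous stays blank and goes on the exception report instead.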
We dealt with approximately 68,000 pharmacies, so I also had to develop similar techniques for recovering pharmacy IDs or identifying those which were missing.
For some states, hundreds of thousands of records had been sent through with numbers skewed to the left or to the right, causing them to be multiplied or divided by ten; other values had blown the width of the field and had been replaced with pound signs.
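Repairs like that are mechanical once you can compare the file against the source claim. Here is a minimal sketch of the idea in Python, with an assumed tolerance and field convention rather than the original logic:

```python
def repair_amount(raw, expected):
    """Repair a dollar-amount field skewed by one decimal place.

    `raw` is the text pulled from the fixed-width file; `expected` is
    the amount recovered from the source claim in the database.
    Thresholds and conventions here are illustrative assumptions.
    """
    if set(raw.strip()) == {"#"}:            # width overflow shown as ####
        return expected
    value = float(raw)
    if abs(value - expected * 10) < 0.01:    # skewed left: ten times too big
        return expected
    if abs(value - expected / 10) < 0.01:    # skewed right: ten times too small
        return expected
    return value                             # looks sane; keep what was sent
```

The key design point is that the file is never the source of truth: every suspect value is re-derived from the claim record and only then resubmitted.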
Some health plans in some states provided IDs by which we could identify the encounter record that was sent. For others, and New Jersey in particular, to effect a correction I had to look for information that would uniquely identify the record sent to the state and match it against the error report coming back. Then I would send a reversal; once it was accepted, the replacement would be sent and accepted in the next encounters run.
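That reverse-then-replace cycle is really a small state machine: an errored encounter must be reversed, the reversal accepted by the state, and only then may the corrected claim go out in the next run. A minimal sketch, with state names invented for the example:

```python
from enum import Enum, auto

class EncounterState(Enum):
    SENT = auto()
    ERRORED = auto()
    REVERSAL_SENT = auto()
    REVERSAL_ACCEPTED = auto()
    REPLACED = auto()

# Legal transitions in the correction cycle; everything else is a bug.
TRANSITIONS = {
    EncounterState.SENT: {EncounterState.ERRORED},
    EncounterState.ERRORED: {EncounterState.REVERSAL_SENT},
    EncounterState.REVERSAL_SENT: {EncounterState.REVERSAL_ACCEPTED},
    EncounterState.REVERSAL_ACCEPTED: {EncounterState.REPLACED},
}

def advance(current, nxt):
    """Move an encounter to its next state, refusing illegal jumps."""
    if nxt not in TRANSITIONS.get(current, set()):
        raise ValueError(f"cannot go from {current.name} to {nxt.name}")
    return nxt
```

Tracking each encounter's state explicitly is what made it possible to know, at any moment, which claims were still waiting on a reversal and which were ready to resend.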
Tens of Millions of Dollars Saved
In the end, I believe I helped save the company tens of millions of dollars in lost revenue and fines. And I had a tremendous amount of fun doing it, where other developers had left in frustration after about three months of struggle and pressure.
I am different. I enjoy a good, tough challenge. For me, an impossible challenge is more enjoyable and easier to handle than a boring challenge.
Other Non-Epic Skills in Computer Science and Mathematics
Although my focus is Epic technical work, I am also highly experienced and skilled in Unix administration, database and web administration, Drupal, and WordPress, and am becoming strong with DevOps and Docker. I also have a B.A. in Mathematics with a minor in Physics from Fresno State, and 27 units of graduate study in Computer Science at Stanford University with a focus on Advanced Systems and Databases.
Thank you, and have an awesome day!
Daniel J. Dick
Send me a message here!
[contact-form-7 id="566" title="Contact Me!"]
Please contact me and I will be more than happy to discuss how I might be of help to you.