Inspirata provides a cancer diagnostics solution that digitizes and automates the pathology workflow using a unique, “solution-as-a-service” delivery model.
The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include developing Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key Responsibilities :
- Create a GRAND Data Lake and Warehouse that pools the data from GRAND's different regions and stores in GCC
- Ensure source data quality measurement, enrichment, and reporting of data quality
- Manage all ETL and data-model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure

Skills Needed :
- Very strong in SQL. Demonstrated experience with RDBMS; Unix shell scripting preferred (e.g., SQL, Postgres, MongoDB, etc.)
- Experience with UNIX and comfortable working with the shell (bash or Korn shell preferred)
- Good understanding of data warehousing concepts. Big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments
- Working with data delivery teams to set up new Hadoop users. This includes setting up Linux users and setting up and testing HDFS, Hive, Pig, and MapReduce access for the new users
- Cluster maintenance, as well as creation and removal of nodes, using tools like Ganglia, Nagios, Cloudera Manager Enterprise, and others
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screening Hadoop cluster job performance and capacity planning
- Monitoring Hadoop cluster connectivity and security
- File system management and monitoring
- HDFS support and maintenance
- Collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades when required
- Defining, developing, documenting, and maintaining Hive-based ETL mappings and scripts
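The data-quality responsibility above ("ensure source data quality measurement, enrichment and reporting") can be sketched in a few lines. This is a hypothetical illustration, not Inspirata's or GRAND's actual pipeline; the field names ("store_id", "region") are invented:

```python
# Hypothetical sketch: measure per-field completeness of source rows
# before they are loaded into the data lake, so data-quality reports
# can flag fields with high null/blank rates.

def field_completeness(rows, fields):
    """Return the fraction of non-null, non-empty values for each field."""
    totals = {f: 0 for f in fields}
    for row in rows:
        for f in fields:
            if row.get(f) not in (None, ""):
                totals[f] += 1
    n = len(rows) or 1  # avoid division by zero on an empty extract
    return {f: totals[f] / n for f in fields}

# Example: three rows extracted from a (fictional) regional store feed.
rows = [
    {"store_id": "S1", "region": "GCC"},
    {"store_id": "S2", "region": ""},
    {"store_id": None, "region": "GCC"},
]
print(field_completeness(rows, ["store_id", "region"]))
```

In a real ELT job this check would run on each extract, with the resulting ratios written to a quality-reporting table rather than printed.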
Data Engineer Position Overview
Our work aims to improve safety and performance in the oil and gas industry through risk analysis. A major part of this involves creating and curating our proprietary data store. On any given day, our data engineers work to understand the details of what new public and private data sources mean, build the software processes that scrape this data, and suggest creative ways in which that data can be used.

Requirements
• Experience with ETL processes
• Data modelling
• Data imports into databases
• Python programming
• Data scraping / web data extraction, crawler and scraping skills
• Familiarity with the Scrapy framework
• Familiarity with GitLab
• Experience with PostgreSQL DB design and DB architecture; someone who can and has designed a DB from scratch and is able to write schemas, tables, etc.

Responsibilities
• Research the meaning of data fields, files, and webpages associated with external public and private data sources identified as relevant to Suventure products
• Create Python scrapers to programmatically fetch relevant data from external data sources, parse the data, and store it in a database
• Determine plans for regular data updates and the sequence of operations for scraping linked data sources
• Implement the above plans
• Document oddities, edge cases, and other interesting qualities of data sources as scrapers are built and tested

Interested candidates can send their updated resume to Thouseef.Ahmed@suventure.in
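The "fetch, parse, store" loop described above can be illustrated with the parse step alone. The posting names Scrapy; this stand-in uses only the standard library's `html.parser` to show the same idea, and the markup (`<td class="field">`) and values are invented for the example:

```python
# Hypothetical sketch of a scraper's parse step: walk an HTML fragment
# and collect the text of every <td class="field"> cell, yielding values
# ready to be stored as a database row.
from html.parser import HTMLParser

class RecordParser(HTMLParser):
    """Collects text from cells marked with class="field" (assumed markup)."""
    def __init__(self):
        super().__init__()
        self.fields = []
        self._in_field = False

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs
        if tag == "td" and ("class", "field") in attrs:
            self._in_field = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_field = False

    def handle_data(self, data):
        if self._in_field:
            self.fields.append(data.strip())

parser = RecordParser()
parser.feed('<tr><td class="field">API-1234</td><td class="field">Texas</td></tr>')
print(parser.fields)
```

In a Scrapy spider the equivalent extraction would typically be a CSS or XPath selector inside the spider's `parse` callback, with the fetched page supplied by the framework.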
Couture.ai provides Artificial Intelligence as a SaaS offering for global online retailers and fashion brands. We use our state-of-the-art deep learning technology to predict the behavior of newly acquired users, which helps retailers tailor experiences for each customer even without any prior interactions with them. After integrating our SDK with initial clients, we have seen product views increase by 25% and sales conversion go up to 3 times. A credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and talks data and algorithms, loves to play with machine learning libraries, and is hands-on with RDBMS/NoSQL databases, big data analytics, forming insights from data, handling Unix and production servers, etc. A Tier-1 college (BE from IITs, BITS Pilani, top NITs, or IIITs, or an MS from Stanford, Berkeley, CMU, or UW–Madison) is a must; exceptionally bright engineers are welcome. Let us know if this interests you and you would like to explore the profile further.
RedLock is a fast-growing cyber-security startup headquartered in California, with offices in Hyderabad.
Phynart makes smart home devices that help users control everyday appliances from anywhere on the planet. We were awarded Best Consumer Electronics Company for the year 2015 by the Centre of Innovation and Incubation in India, in partnership with NASSCOM.
You will be working in the Cancer Information Data Trust (CIDT) business unit within Inspirata, India, at Bangalore. Inspirata is creating the most innovative cancer-information big data platform, with associated analytics that render data to various portals. The Data Integrator will play a critical role in bringing accurate, conditioned data to CIDT and Digital Pathology products.

Your Role
The Sr. Software Engineer – Data Integration is an active and influential member of the CIDT team and is responsible for the development of Inspirata's Data Integration Bus solution. The ideal candidate is a highly experienced engineer with exceptional skills, an aptitude for integration technologies, and the drive and desire to push the boundaries to solve complex problems.

Your Responsibilities
• Design and develop a middleware bus technology
• Have the developed product acquire data residing in different sources of medical data elements and provide a unified and trusted view of the data
• Experience with integration/ESB tools such as Orion Rhapsody, CorePoint, Informatica, TIBCO, Talend, or similar
• Deep understanding of data management best practices, including ETL, data modeling, data management, file management, reference data management, etc.
• At least working knowledge of integrating unstructured data (NoSQL) along with structured data (RDBMS)
• Ability to identify multiple approaches to problem solving and recommend the best solution

Requirements
• B.E. / M.Tech in Computer Science or a related field with 5–8 years of product development experience in ETL, data modeling, SQL, and data analysis in the health care industry (preferred)
• Knowledge of HL7 protocols and PHI regulations
• Good communication skills
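The "unified and trusted view" responsibility above can be sketched as a merge of a structured (RDBMS-style) record with an unstructured (document-style) record for the same patient. This is a hedged illustration, not Inspirata's schema or method; the field names and the rule "structured source wins on conflict" are assumptions made for the example:

```python
# Hypothetical sketch: combine a structured row and a free-form document
# keyed to the same patient into one unified record, letting the structured
# (trusted) source override conflicting values from the document store.

def unified_view(rdbms_row, document):
    """Merge two records for one patient; RDBMS values win on conflict."""
    merged = dict(document)   # start from the looser document fields
    merged.update(rdbms_row)  # structured values override on conflict
    return merged

row = {"patient_id": "P001", "diagnosis_code": "C50.9"}
doc = {"patient_id": "P001",
       "pathology_note": "invasive carcinoma",
       "diagnosis_code": "unknown"}
print(unified_view(row, doc))
```

A production integration bus would also track provenance and apply reference-data lookups rather than a blind dictionary merge, but the precedence idea is the same.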
We are now looking for passionate DATA MIGRATION DEVELOPERS to work at our Hyderabad site.

Role Description:
We are looking for data migration developers for our BSS delivery projects. Your main goal is to analyse migration data, create the migration solution, and execute the data migration. You will work as part of the migration team in cooperation with our migration architect and BSS delivery project manager. You have a solid background in telecom BSS and experience in data migrations. You will be expected to interpret data analysis produced by business analysts, raise issues or questions, and work directly with the client on-site to resolve them. You must therefore be capable of understanding the telecom business behind a technical solution.

Requirements:
– Understanding of different data migration approaches and the capability to adapt requirements to migration tool development and utilization
– Capability to analyse the shape and health of source data
– Extraction of data from multiple legacy sources
– Building transformation code that adheres to data mappings
– Loading data into either new or existing target solutions

We appreciate:
– Deep knowledge of ETL processes and/or other migration tools
– Proven experience in high-volume data migrations in business-critical telecom systems
– Experience in telecom business support systems
– Ability to apply innovation and improvement to the data migration/support processes, and the ability to manage multiple priorities effectively
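The "transformation code that adheres to data mappings" step above can be sketched as a mapping table driving both field renames and value conversions from a legacy BSS record to the target schema. The mapping entries (`MSISDN`, `CUST_NAME`, `BAL`) are invented examples, not any real BSS schema:

```python
# Hypothetical sketch: a declarative field mapping for one legacy entity.
# Each entry is legacy_field -> (target_field, conversion_function).
MAPPING = {
    "MSISDN": ("phone_number", str),
    "CUST_NAME": ("customer_name", str.title),
    "BAL": ("balance", float),
}

def transform(legacy_row):
    """Apply the field mapping to one legacy record, skipping absent fields."""
    out = {}
    for src, (dst, convert) in MAPPING.items():
        if src in legacy_row:
            out[dst] = convert(legacy_row[src])
    return out

print(transform({"MSISDN": "919900112233", "CUST_NAME": "jane doe", "BAL": "120.50"}))
```

Keeping the mapping declarative, as here, mirrors how migration tools let analysts review and sign off on the data mappings separately from the transformation engine.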
We can offer you:
– Interesting and challenging work in a fast-growing, customer-oriented company
– An international and multicultural working environment with experienced and enthusiastic colleagues
– Plenty of opportunities to learn, grow, and progress in your career

At Qvantel we have built a young, dynamic culture where people are motivated to learn and develop themselves, are used to working both independently and in teams, and have a systematic, hands-on working style and a can-do attitude. Our people are used to communicating across other cultures and time zones. A sense of humor can also come in handy. Don't hesitate to ask for more information from Srinivas Bollipally, our Recruitment Specialist, reachable at Srinivas.firstname.lastname@example.org
We are looking for a senior RoR developer to join our team full-time.

You must:
1. Like to solve challenging real-world problems with software.
2. Be a ninja in Ruby and Rails (and appreciate that the two are different).
3. Enjoy working on back-end code at the API level.
4. Full-stack experience is a plus, but not necessary. If you don't enjoy writing front-end code, no problem (as long as you are good at back-end).

Why join us?
1. Work for a product company with real-life impact - we touch the lives of hundreds of patients every day.
2. Learn best practices working directly with the CTO, who is a hands-on developer with 15+ years of experience.
3. Work on a production system handling thousands of transactions every day - not some cool idea that nobody uses.