Inspirata provides a cancer diagnostics solution that digitizes and automates the pathology workflow using a unique, “solution-as-a-service” delivery model.
We are looking for people who have worked for at least one year with Ethereum, Solidity, databases, and backend development. The candidate should have a sound knowledge of data structures.
Role: SSE / Lead Java Integration - Mule ESB
Job Location: Pune
Job Title: Mule ESB Developer / Lead

Required Skills:
- Strong working experience with Mule ESB
- Good experience in Java, J2EE, Web Services, SOA, Spring and Hibernate
- 3+ years of IT experience with at least 1+ years of Mule experience

Desired Skills:
- Good practical understanding of technology and its application
- Good grasp of the technology and tools used for development
- Fair understanding of project management skills
- Fair amount of domain expertise gained through working on the application or through certification programs

Skills Required:
- Must have experience developing Mule ESB flows using Java and MuleSoft Connectors
- Experience developing ESB solutions using technologies such as JSON, RESTful APIs, XML, FTP, AWS S3, MySQL, HTTP and RAML
- Strong hands-on Java development experience
- Nice to have: Spring and JPA

Please share your resume with the following information:
- Current Salary:
- Expected Salary:
- Notice Period:
- How soon can you join if selected:
- Available to attend the face-to-face interview on weekdays:

Thanks,
Vandana Saxena
Manager - HR & TA
Tel: +91 20 66230903
email@example.com
The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include developing Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.

Key Responsibilities:
- Create a GRAND Data Lake and Warehouse that pools the data from GRAND's different regions and stores in GCC
- Ensure source data quality measurement, enrichment and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure

Skills Needed:
- Very strong in SQL, with demonstrated RDBMS experience (e.g., SQL, Postgres, MongoDB); Unix shell scripting preferred
- Experience with UNIX and comfortable working with the shell (bash or Korn shell preferred)
- Good understanding of data warehousing concepts and big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Aligning with the systems engineering team to propose and deploy the new hardware and software environments required for Hadoop and to expand existing environments
- Working with data delivery teams to set up new Hadoop users, including setting up Linux users and setting up and testing HDFS, Hive, Pig and MapReduce access for the new users
- Cluster maintenance, as well as creation and removal of nodes, using tools such as Ganglia, Nagios and Cloudera Manager Enterprise
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screening Hadoop cluster job performance and capacity planning
- Monitoring Hadoop cluster connectivity and security
- File system management and monitoring
- HDFS support and maintenance
- Collaborating with application teams to install operating system and Hadoop updates, patches and version upgrades when required
- Defining, developing, documenting and maintaining Hive-based ETL mappings and scripts
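The ETL mappings described above follow a standard extract–aggregate–load pattern. A minimal sketch of that pattern, with a hypothetical staging schema and sqlite3 standing in for the actual warehouse (Hive/Postgres):

```python
import sqlite3

# Hypothetical staging table standing in for a regional sales feed;
# sqlite3 is only a stand-in for the real warehouse engine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging_sales (region TEXT, store_id INTEGER, amount REAL)")
conn.executemany(
    "INSERT INTO staging_sales VALUES (?, ?, ?)",
    [("GCC", 1, 100.0), ("GCC", 1, 50.0), ("EU", 2, 75.0)],
)

# Transform step: aggregate per region/store before loading the lake table.
conn.execute("""
    CREATE TABLE lake_sales AS
    SELECT region, store_id, SUM(amount) AS total_amount, COUNT(*) AS txn_count
    FROM staging_sales
    GROUP BY region, store_id
""")

rows = conn.execute(
    "SELECT region, store_id, total_amount FROM lake_sales ORDER BY region"
).fetchall()
print(rows)  # [('EU', 2, 75.0), ('GCC', 1, 150.0)]
```

In a real Hive mapping the same GROUP BY aggregation would be scheduled as a recurring job, with data quality checks on the staging table before the load.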
Job Description:
• 7+ years of relevant experience
• Ensure delivery of high-quality, software-generated, data-driven reports
• Pursue ad-hoc analysis of new data sources and visualizations to support current customer needs
• Own data- and analytics-based product development projects, i.e. calibration, data-based insights, etc.
• Should have worked on ETL processes
• Data modelling and data imports on databases
• Strong programming experience in Python and R
• Experience with PostgreSQL
• Data scraping, web data extraction and crawler skills
• Should be familiar with GitLab (good to have)
• Experience using SQL to extract data from a well-structured data store
• Experience in the Energy, Oil and Gas industry is an added plus

Interested candidates can send their updated resume to Thouseef.Ahmed@suventure.in
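The web-data-extraction skills listed above boil down to pulling structured values out of HTML. A minimal standard-library sketch (the HTML snippet is hypothetical, not a real data source; production crawlers would add fetching, politeness and error handling):

```python
from html.parser import HTMLParser

# Collects href attributes from anchor tags as a toy extraction target.
class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Hypothetical report-index page used as input.
html = '<ul><li><a href="/reports/2020">2020</a></li><li><a href="/reports/2021">2021</a></li></ul>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['/reports/2020', '/reports/2021']
```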
Couture.ai provides Artificial Intelligence as a SaaS offering for global online retailers and fashion brands. Our state-of-the-art deep learning technology predicts the behavior of newly acquired users, helping retailers tailor experiences for each customer even without any prior interactions with them. After integrating our SDK with initial clients, we have seen product views increase by 25% and sales conversions go up to 3x. A credible display of innovation in past projects (or academia) is a must. We are looking for a candidate who lives and talks data and algorithms, loves to play with machine learning libraries, is hands-on with RDBMS/NoSQL databases, big data analytics and forming insights from data, and is comfortable handling Unix and production servers. A Tier-1 college (BE from the IITs, BITS Pilani, top NITs or IIITs, or an MS from Stanford, Berkeley, CMU or UW-Madison) is a must; exceptionally bright engineers are welcome. Let us know if this interests you and you would like to explore the profile further.
RedLock is a fast-growing cyber-security startup headquartered in California, with offices in Hyderabad.
Phynart makes smart home devices that help users control everyday appliances from anywhere on the planet. We were awarded Best Consumer Electronics Company for 2015 by the Centre of Innovation and Incubation in India, in partnership with NASSCOM.
You will be working in the Cancer Information Data Trust (CIDT) business unit within Inspirata, India, at Bangalore. Inspirata is creating the most innovative cancer information big data platform, with associated analytics that render data to various portals. The Data Integrator plays a critical role in bringing accurate, conditioned data to CIDT and Digital Pathology products.

Your Role: The Sr. Software Engineer – Data Integration is an active and influential member of the CIDT team and is responsible for the development of Inspirata's Data Integration Bus solution. The ideal candidate is a highly experienced engineer with exceptional skills, an aptitude for integration technologies, and the drive and desire to push boundaries to solve complex problems.

Your Responsibilities:
• Design and development of a middleware bus technology
• Have the developed product acquire data residing in different sources of medical data elements and provide a unified, trusted view of the data
• Experience with integration/ESB tools such as Orion Rhapsody, Corepoint, Informatica, TIBCO, Talend or similar
• Deep understanding of data management best practices, including ETL, data modeling, file management, reference data management, etc.
• At least a working knowledge of integrating unstructured data (NoSQL) along with structured data (RDBMS)
• Ability to identify multiple approaches to problem solving and recommend the best solution

Requirements:
• B.E. / M.Tech in Computer Science or a related field with 5–8 years of product development experience in ETL, data modeling, SQL and data analysis, in the healthcare industry (preferred)
• Knowledge of HL7 protocols and PHI regulations
• Good communication skills
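The HL7 knowledge mentioned above centers on the v2 wire format: segments separated by carriage returns, fields by pipes. A minimal parsing sketch (the message content is hypothetical; real engines such as Rhapsody or Corepoint also handle encoding characters, escapes and acknowledgments):

```python
# A hypothetical HL7 v2 ORU message: one segment per line, fields split on '|'.
message = "\r".join([
    "MSH|^~\\&|LAB|HOSP|EHR|HOSP|202301011200||ORU^R01|MSG0001|P|2.3",
    "PID|1||12345||DOE^JANE",
    "OBX|1|NM|GLU^Glucose||95|mg/dL",
])

# Index segments by their three-letter type so fields can be addressed
# as SEGMENT-n (fields[0] is the segment name, so fields[5] is PID-5).
segments = {}
for line in message.split("\r"):
    fields = line.split("|")
    segments.setdefault(fields[0], []).append(fields)

patient_name = segments["PID"][0][5]
print(patient_name)  # DOE^JANE
```

An integration bus would apply mappings on top of this structure, e.g. translating PID-5 into the target system's patient-name fields.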
We are now looking for passionate DATA MIGRATION DEVELOPERS to work at our Hyderabad site.

Role Description: We are looking for data migration developers for our BSS delivery projects. Your main goal is to analyse migration data, create the migration solution and execute the data migration. You will work as part of the migration team in cooperation with our migration architect and the BSS delivery project manager. You have a solid background in telecom BSS and experience in data migrations. You will be expected to interpret data analysis produced by Business Analysts, raise issues or questions, and work directly with the client on-site to resolve them. You must therefore be capable of understanding the telecom business behind a technical solution.

Requirements:
– Understanding of different data migration approaches and the capability to adapt requirements to migration tool development and utilization
– Capability to analyse the shape and health of source data
– Extraction of data from multiple legacy sources
– Building transformation code to adhere to data mappings
– Loading data into either new or existing target solutions

We appreciate:
– Deep knowledge of ETL processes and/or other migration tools
– Proven experience in high-volume data migrations in business-critical telecom systems
– Experience with telecom business support systems
– The ability to apply innovation and improvement to data migration and support processes, and to manage multiple priorities effectively
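The "transformation code to adhere to data mappings" step above can be sketched as a small field-mapping function. All column names here are hypothetical, not from any real BSS schema:

```python
# Hypothetical mapping from legacy subscriber columns to the target schema.
FIELD_MAP = {
    "CUST_NO": "customer_id",
    "MSISDN": "phone_number",
    "PLAN_CD": "tariff_plan",
}

def transform(legacy_row):
    """Apply the data mapping and flag unmapped source columns for review."""
    target, unmapped = {}, []
    for src, value in legacy_row.items():
        if src in FIELD_MAP:
            target[FIELD_MAP[src]] = value.strip()  # trivial cleansing step
        else:
            unmapped.append(src)
    return target, unmapped

row = {"CUST_NO": " 1001 ", "MSISDN": "35840123", "PLAN_CD": "GOLD", "OBSOLETE": "x"}
migrated, issues = transform(row)
print(migrated)  # {'customer_id': '1001', 'phone_number': '35840123', 'tariff_plan': 'GOLD'}
print(issues)    # ['OBSOLETE']
```

Surfacing unmapped columns rather than silently dropping them mirrors the role's duty to raise issues back to the Business Analysts.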
We can offer you:
– Interesting and challenging work in a fast-growing, customer-oriented company
– An international and multicultural working environment with experienced and enthusiastic colleagues
– Plenty of opportunities to learn, grow and progress in your career

At Qvantel we have built a young, dynamic culture where people are motivated to learn and develop themselves, are used to working both independently and in teams, and have a systematic, hands-on working style and a can-do attitude. Our people are used to communicating across cultures and time zones. A sense of humour can also come in handy. Don't hesitate to ask for more information from Srinivas Bollipally, our Recruitment Specialist, reachable at Srinivas.firstname.lastname@example.org