Founded in 2016, Dexciss Technology Pvt. Ltd is a bootstrapped company based in Pune. It currently has 6-50 employees and works in the domain of Enterprise Software.
About Achira: Achira's cutting-edge microfluidics technology empowers patients and doctors with convenient and timely access to accurate medical testing. We develop a proprietary lab-on-chip platform to perform rapid, quantitative and multiplexed immunoassays at low cost.
Core Values:
• Translate cutting-edge research into products that meet market demand
• Constantly innovate to build integrated solutions for healthcare needs
• Maintain ethical and professional standards of behavior
• Encourage employee creativity; promote merit and commitment
Job Description: The selected candidate needs to possess the skills for the following tasks:
• Design and test a user interface for a medical diagnostic instrument that matches how users perceive and operate it
• Create a backend database to acquire data from our instrument, and build a front-end application to visualize it
• Possess strong knowledge of data structures and algorithms, as varying amounts of data and data types are handled
• Be very good with software version control and Git tooling
• Prior experience with Python app development and cloud databases is an advantage
• Strong knowledge of Linux commands and the Linux operating system is required
The candidate should work with interdisciplinary teams to achieve the end goals. Delivering projects on competitive timelines is a quality we expect in every candidate.
Experience: 0-2 Years
Who can apply?
• Candidates who are ready to put their software skills to use by creating quality medical products to build a healthier world
• Have a Bachelor's degree specifically in Computer Science, Software Engineering, Information Technology, or another Engineering or technical discipline
Other requirements
Skill(s) Must have: Python, Java, C/C++ Programming, OOP concepts, Data Structures, HTML/CSS.
Skill(s) Good to have: Basic Android App Development, AngularJS
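As a rough illustration of the acquire-and-visualize backend this posting describes, here is a minimal sketch using SQLite; the schema, assay names, and values are invented for the example, not taken from Achira's product:

```python
import sqlite3
from datetime import datetime, timezone

# In-memory stand-in for the instrument results store; schema is hypothetical.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (assay TEXT, value REAL, taken_at TEXT)")

def record_reading(assay, value):
    """What an acquisition backend might do per measurement."""
    db.execute("INSERT INTO readings VALUES (?, ?, ?)",
               (assay, value, datetime.now(timezone.utc).isoformat()))

record_reading("CRP", 4.2)
record_reading("CRP", 5.1)
record_reading("TSH", 1.8)

# The kind of aggregate query a visualization front end would issue.
avg_crp = db.execute(
    "SELECT AVG(value) FROM readings WHERE assay = ?", ("CRP",)).fetchone()[0]
```

A real deployment would of course use a persistent (possibly cloud-hosted) database rather than an in-memory one.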
Data Scientist - We are looking for a candidate to build great recommendation engines and power an intelligent m.Paani user journey.
Responsibilities:
- Data mining using methods like associations, correlations, inference, clustering, graph analysis, etc.
- Scale the machine learning algorithms that power our platform to support our growing customer base and increasing data volume
- Design and implement machine learning, information extraction, and probabilistic matching algorithms and models
- Care about designing the full machine learning pipeline
- Extending the company's data with 3rd-party sources
- Enhancing data collection procedures
- Processing, cleaning and verifying collected data
- Ad hoc analysis of the data, presenting clear results
- Creating advanced analytics products that provide actionable insights
The Individual: We are looking for a candidate with the following skills, experience and attributes.
Required:
- Someone with 2+ years of work experience in machine learning
- Educational qualification relevant to the role: a degree in Statistics, certificate courses in Big Data, Machine Learning, etc.
- Knowledge of machine learning techniques and algorithms
- Knowledge of languages and toolkits like Python, R, NumPy
- Knowledge of data visualization tools like D3.js, ggplot2
- Knowledge of query languages like SQL, Hive, Pig
- Familiarity with Big Data architecture and tools like Hadoop, Spark, MapReduce
- Familiarity with NoSQL databases like MongoDB, Cassandra, HBase
- Good applied statistics skills: distributions, statistical testing, regression, etc.
Compensation & Logistics: This is a full-time opportunity. Compensation will be in line with startup norms and will be based on qualifications and experience. The position is based in Mumbai, India, and the candidate must live in Mumbai or be willing to relocate.
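To hint at the clustering work this posting lists, here is a toy k-means in plain NumPy; the two-blob dataset and the greedy farthest-point initialisation are choices made for the sketch, not m.Paani's method:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Toy k-means: greedy farthest-point init, then assign/re-average."""
    centroids = [X[0]]
    for _ in range(k - 1):
        # next centroid: the point farthest from all chosen centroids
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[np.argmax(d)])
    centroids = np.stack(centroids)
    for _ in range(iters):
        # distance from every point to every centroid, then nearest wins
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centroids[j] = pts.mean(axis=0)
    return labels, centroids

# Two well-separated synthetic blobs: points 0-49 near (0,0), 50-99 near (5,5).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
labels, centroids = kmeans(X, 2)
```

Production systems would reach for a library implementation (e.g. scikit-learn or Spark MLlib) rather than hand-rolling this loop.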
This role is for an Android application developer who can work remotely. We are looking for a creative, self-motivated developer who is keen to work in a profitable startup.
Mandatory Skills:
• 2-3+ years' experience working as a Python developer, with a good understanding of NumPy
• Developed programs for data applications or data analysis tasks
• Fundamentals of mathematics (linear algebra, probability, vectors, matrices)
• DevOps basics - servers, deployment, shell commands, Git, databases
• Ability to solve new problems with innovative designs
• Engineering degree from a good institute
Good to Have:
• Working with large datasets
• R or the Spark programming language
Interested candidates can send their updated resume to Thouseef.Ahmed@suventure.in
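The NumPy plus linear-algebra fundamentals asked for above can be illustrated with a least-squares line fit; the data points are invented so the true coefficients are known:

```python
import numpy as np

# Fit y = a*x + b by least squares on a design matrix [x | 1].
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                          # exact line, so the fit is recoverable
A = np.column_stack([x, np.ones_like(x)])  # each row: (x_i, 1)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
a, b = coef
```

`lstsq` solves min ||A·coef - y||² via the SVD, which is the matrix/vector machinery the posting alludes to.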
The candidate will be responsible for all aspects of data acquisition, data transformation, and analytics scheduling and operationalization to drive high-visibility, cross-division outcomes. Expected deliverables include the development of Big Data ELT jobs using a mix of technologies, stitching together complex and seemingly unrelated data sets for mass consumption, and automating and scaling analytics into GRAND's Data Lake.
Key Responsibilities:
- Create a GRAND Data Lake and Warehouse which pools the data from GRAND's different regions and stores in the GCC
- Ensure source data quality measurement, enrichment and reporting of data quality
- Manage all ETL and data model update routines
- Integrate new data sources into the DWH
- Manage the DWH cloud (AWS/Azure/Google) and infrastructure
Skills Needed:
- Very strong in SQL. Demonstrated experience with RDBMS and related stores (e.g., Postgres, MongoDB); Unix shell scripting preferred
- Experience with UNIX and comfortable working with the shell (bash or Korn preferred)
- Good understanding of data warehousing concepts. Big data systems: Hadoop, NoSQL, HBase, HDFS, MapReduce
- Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop, and to expand existing environments
- Working with data delivery teams to set up new Hadoop users. This includes setting up Linux users and setting up and testing HDFS, Hive, Pig and MapReduce access for the new users
- Cluster maintenance, as well as creation and removal of nodes, using tools like Ganglia, Nagios, Cloudera Manager Enterprise, and others
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screening Hadoop cluster job performance and capacity planning
- Monitoring Hadoop cluster connectivity and security
- File system management and monitoring
- HDFS support and maintenance
- Collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades when required
- Defining, developing, documenting and maintaining Hive-based ETL mappings and scripts
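The extract-transform-load pattern at the heart of this role can be sketched with SQLite stand-ins for the source system and warehouse; the table names, columns, and quality rule are all invented for the example:

```python
import sqlite3

# Stand-in "source" and "warehouse" databases (in-memory for the sketch).
src = sqlite3.connect(":memory:")
dwh = sqlite3.connect(":memory:")

src.execute("CREATE TABLE sales (store TEXT, amount TEXT)")
src.executemany("INSERT INTO sales VALUES (?, ?)",
                [("pune", "100"), ("mumbai", " 250 "), ("pune", "bad")])
dwh.execute("CREATE TABLE fact_sales (store TEXT, amount REAL)")

# Extract
rows = src.execute("SELECT store, amount FROM sales").fetchall()

# Transform: normalise text, coerce amounts, drop rows failing the quality check
clean = []
for store, amount in rows:
    try:
        clean.append((store.strip().upper(), float(amount)))
    except ValueError:
        pass  # a real job would route rejects to a data-quality table instead

# Load
dwh.executemany("INSERT INTO fact_sales VALUES (?, ?)", clean)
dwh.commit()

total = dwh.execute("SELECT SUM(amount) FROM fact_sales").fetchone()[0]
```

At GRAND's scale the same extract/transform/load and quality-measurement steps would run as Hive or MapReduce jobs over HDFS rather than in a single process.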
We are looking for a creative candidate who can join our team to build new products. Join us to learn more.
Job description: We are building a platform to connect car care providers with car owners. We help partners with booking, pricing, parts, and customer communication. We need someone who can drive our solutions and take ownership.
Skillset:
1) Experience building scalable systems with microservices architectures
2) Knowledge of the Django web framework
3) Experience identifying suitable open source systems and customizing them to our requirements
4) Understanding of DevOps, automation and monitoring systems - Ansible, Jenkins, Nagios
5) Hands-on experience with Django, PHP, MySQL/MongoDB
6) Experience with mobile apps (Android, iOS) and progressive web apps
7) Has built and mentored teams in the past
8) Knowledge of OBD (on-board diagnostics) systems
Responsibilities:
1) Ownership of our service delivery platform
2) Designing technical architectures for complex solutions
3) Providing technical and thought leadership on agile teams
4) Writing great software adhering to agile software engineering practices (e.g., TDD, continuous integration, automated tests, etc.)
5) Code review and mentorship of other developers on the agile team
6) Collaborating with product owners and other stakeholders
This is primarily an equity-based role with a salary component attached.
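As a hint at the microservice and monitoring patterns this posting names, here is a tiny stdlib HTTP service exposing the kind of health endpoint that tools like Nagios poll; the endpoint path and payload are invented, and a real service here would be built on Django:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from threading import Thread
from urllib.request import urlopen

class HealthHandler(BaseHTTPRequestHandler):
    """Minimal service with a /health endpoint for monitoring probes."""
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet

server = HTTPServer(("127.0.0.1", 0), HealthHandler)  # port 0: pick a free port
Thread(target=server.serve_forever, daemon=True).start()

# What a monitoring probe (or a smoke test in CI) would do:
with urlopen(f"http://127.0.0.1:{server.server_port}/health") as resp:
    payload = json.loads(resp.read())

server.shutdown()
```

Per-service health checks like this are what let automation tooling (Jenkins deploy gates, Nagios alerts) reason about each microservice independently.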