Requirements:
● Should be language agnostic, with extensive, expert programming experience in at least one programming language (strong OO skills preferred).
● Deep experience in at least one general-purpose programming language; Java, Scala, or C++ preferred.
● A solid foundation in computer science, with strong competencies in data structures, algorithms, and software design.
● Has worked end to end on 2-3 complex projects.
● Has worked in a startup-like environment with high levels of ownership and commitment.
● Experience building highly scalable business applications that implement large, complex business flows and deal with huge amounts of data.
● Extensive experience working with distributed technologies such as Kafka, MongoDB, Redis/Aerospike, MySQL, AWS, etc.
● Experience with multi-threading and concurrent programming.
● Ability to switch between technologies and learn new skills on the go.
● 6+ years of experience writing code and solving problems at large scale.
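The multi-threading and concurrency requirement above is the kind of skill a minimal sketch can illustrate. This is a hypothetical example, not from the posting: a thread-safe counter guarded by a lock, the basic primitive behind correct concurrent code.

```python
import threading

# Illustrative only: a thread-safe counter. Without the lock, concurrent
# increments would race and lose updates.
class SafeCounter:
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        # The lock makes the read-modify-write atomic across threads.
        with self._lock:
            self._value += 1

    @property
    def value(self):
        with self._lock:
            return self._value

counter = SafeCounter()
threads = [
    threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 8000
```

With 8 threads each incrementing 1000 times, the lock guarantees the final count is exactly 8000.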
Why you should be interested in this role?
Biofourmis is pioneering an entirely new category of digital health by developing clinically validated software-based therapeutics that provide better outcomes for patients and smarter engagement and tracking tools for clinicians. By combining these with machine learning technology, we are creating a truly unique movement in the health space. Our team works in a cross-functional agile setup consisting of mobile developers, backend developers, designers, product managers, researchers, and scrum masters.
Biofourmis, headquartered in Boston, develops and delivers clinically validated software-based therapeutics to provide cost-effective solutions for payers, accelerated research and drug development for biopharmaceutical companies, advanced tools for clinicians to deliver personalized care, and, ultimately, better outcomes for patients. Our robust digital therapeutics products and pipeline cover multiple therapeutic areas, including heart failure, acute coronary syndrome, COPD, and chronic pain. A successful Series C round, strategic acquisitions, key multi-year commercial contracts, FDA approvals, new U.S. headquarters, and industry recognition were among our achievements in 2019 and 2020.
Customer Engineering Team at Biofourmis
Our team's mission is to improve the entire customer support infrastructure, helping our Customer Care and operations teams deliver exceptional experiences while continuously improving the whole journey for users/patients and clinicians alike.
What you will do
● Design, build, and improve the experiences for the Biofourmis Solution backend
● Partner with the product management team to define and execute the feature roadmap
● Coordinate with cross-functional teams (Backend, DevOps, Design, etc.) on planning and execution
● Proactively manage stakeholder communication related to deliverables, risks, changes, and dependencies
● Maintain release notes by documenting new services, fixes, and setup configuration details
● Monitor builds and collaborate with DevOps and test team members to resolve build issues
● Troubleshoot deployment issues in conjunction with DevOps and test team members
● Provide technology leadership to the team and foster engineering excellence
What You Will Need
● 2-5 years of experience in backend software development, especially cloud-native development including microservices and serverless
● 2+ years of Linux experience, plus experience with Azure, AWS, or a similar cloud service
● Knowledge of cloud computing, threading, performance tuning, and security
● Experience with serverless and microservice application development on AWS or Azure
● Experience with any big data processing technology, such as setting up data pipelines and batch/realtime processing
● Experience with any container orchestration technology, such as Docker or Kubernetes, or with serverless frameworks
● Familiarity with data management, and with SQL and NoSQL databases (in-memory or otherwise)
● Proficiency in Python and related signal-processing packages (SciPy, NumPy, etc.)
● Experience developing serverless applications in Azure or AWS (Lambda, API Gateway, DynamoDB) is preferred
● Knowledge of MATLAB, R, or another scripting language is a plus
● Ability to work independently to solve complex problems with large, real-world datasets
● Strong knowledge of DevOps tools (open source or otherwise) and practices, and of Agile software development methodology
● Independent, self-motivated contributor who is passionate about software development
Skills
Python and related signal-processing packages (SciPy, NumPy, etc.)
Job Location
Biofourmis India Private Limited, WeWork, Prestige Central #35, Infantry Road, Bengaluru - 560001
Note: The role is expected to work across different time zones (Singapore or US). A remote work option is available during the pandemic period.
Job Title: Data Engineer (Remote)
Job Description
You will work on:
We help many of our clients make sense of their large investments in data, whether building analytics solutions or machine learning applications. You will work on cutting-edge cloud-native technologies to crunch terabytes of data into meaningful insights.
What you will do (Responsibilities):
Collaborate with Business, Marketing & CRM teams to build highly efficient data pipelines.
You will be responsible for:
Dealing with customer data and building highly efficient pipelines
Building insights dashboards
Troubleshooting data loss, data inconsistency, and other data-related issues
Maintaining backend services (written in Golang) for metadata generation
Providing prompt support and solutions for Product, CRM, and Marketing partners
What you bring (Skills):
2+ years of experience in data engineering
Coding experience with one of the following languages: Golang, Java, Python, C++
Fluency in SQL
Working experience with at least one of the following data-processing engines: Flink, Spark, Hadoop, Hive
Great if you know (Skills):
T-shaped skills are always preferred, so if you have the passion to work across the full-stack spectrum, it is more than welcome.
Exposure to infrastructure skills like Docker, Istio, and Kubernetes is a plus
Experience building and maintaining large-scale and/or real-time complex data processing pipelines using Flink, Hadoop, Hive, Storm, etc.
Advantage Cognologix:
Higher degree of autonomy, startup culture & small teams
Opportunities to become an expert in emerging technologies
Remote working options for the right maturity level
Competitive salary & family benefits
Performance-based career advancement
About Cognologix:
Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients' strategic goals.
We are a data-focused organization helping our clients deliver their next generation of products in the most efficient, modern, and cloud-native way.
Skills: JAVA, PYTHON, HADOOP, HIVE, SPARK PROGRAMMING, KAFKA
Thanks & regards,
Cognologix - HR Dept.
Who we are?
Searce is a niche Cloud Consulting business with futuristic tech DNA. We do new-age tech to realise the "Next" in the "Now" for our clients. We specialise in Cloud Data Engineering, AI/Machine Learning, and advanced cloud infrastructure tech such as Anthos and Kubernetes. We are one of the top and fastest-growing partners for Google Cloud and AWS globally, with over 2,500 clients successfully moved to the cloud.
What we believe?
Best practices are overrated: implementing best practices can only take you so far.
Honesty and Transparency: we believe in the naked truth. We do what we tell and tell what we do.
Client Partnership: a client-vendor relationship? No. We partner with clients instead. And our sales team comprises 100% of our clients.
How we work?
It's all about being happier first, and the rest follows. Searce work culture is defined by HAPPIER.
Humble: Happy people don't carry ego around. We listen to understand, not to respond.
Adaptable: We are comfortable with uncertainty, and we accept changes well, as that's what life's about.
Positive: We are super positive about work and life in general. We love to forget and forgive. We don't hold grudges; we don't have time or adequate space for it.
Passionate: We are as passionate about the great street-food vendor across the street as about Tesla's new model, and so on. Passion is what drives us to work and makes us deliver the quality we deliver.
Innovative: Innovate or die. We love to challenge the status quo.
Experimental: We encourage curiosity and making mistakes.
Responsible: Driven. Self-motivated. Self-governing teams. We own it.
Responsibilities:
As a Data Architect, you will work with business leads, analysts, and data scientists to understand the business domain, and manage data engineers to build data products that empower better decision making. You are passionate about the data quality of our business metrics and the flexibility of solutions that scale to respond to broader business questions.
If you love to solve problems using your skills, then come join Team Searce. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.
What You'll Do
Understand the business problem and translate it into data services and engineering outcomes
Explore new technologies and learn new techniques to solve business problems creatively
Collaborate with many teams, engineering and business, to build better data products
Manage the team and handle delivery of 2-3 projects
What We're Looking For
4-7 years of experience, with hands-on experience in any one programming language (Python, Java, Scala)
Understanding of SQL is a must
Big data (Hadoop, Hive, Yarn, Sqoop)
MPP platforms (Spark, Presto)
Data-pipeline & scheduler tools (Oozie, Airflow, NiFi)
Streaming engines (Kafka, Storm, Spark Streaming)
Any relational database or DW experience
Any ETL tool experience
Hands-on experience in pipeline design, ETL, and application development
Hands-on experience with cloud platforms like AWS, GCP, etc.
Good communication skills and strong analytical skills
Experience in team handling and project delivery
Strong experience in designing and developing big data applications: Hadoop, Spark/Flink/Storm, Kafka, Hive, HBase, Java/Scala, Airflow/Oozie/NiFi, Redis/Hazelcast. Experience in Cosmos DB, Azure Synapse, and Azure Data Factory is a plus.
Job Description:
• As a Python full-stack developer, your role will involve designing, developing, and deploying full-stack applications for artificial intelligence projects, with a focus on low latency and scalability.
• You will also need to optimize applications for better performance and a large number of concurrent users.
• We want a strong technologist who cares about doing things the right way rather than just doing them, and who thrives in a complex and challenging environment.
Who are we looking for?
• Bachelors / Masters in Computer Science or equivalent, with at least 3 years of professional experience.
• Solid understanding of design patterns, data structures, and advanced programming techniques.
• As an engineer on our team, you will design, code, test, and debug quality software programs.
• Strong software design and architectural skills in object-oriented and functional programming styles.
• Python, Celery, RabbitMQ, Kafka, multithreading, async, microservices, Docker, Kubernetes.
• Experience working with machine learning pipelines.
• Experience in React.js.
• Experience in Celery and RabbitMQ/Kafka.
• Experience with unit testing tools.
• Experience working with SQL & NoSQL databases such as MySQL and MongoDB.
• Exposure to cloud technologies.
• Demonstrated ability to work in a fast-paced, hyper-growth environment where the requirements are constantly changing.
• Nice to have: experience developing products containing machine learning use cases.
• Familiarity with agile techniques like code reviews, pair programming, collective code ownership, clean code, TDD, and refactoring.
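The posting lists async programming and serving many concurrent users. A minimal sketch of that style, with illustrative names not taken from the posting: `asyncio.gather` runs many simulated I/O-bound calls concurrently on one event loop, so fifty 100 ms calls finish in roughly 100 ms rather than five seconds.

```python
import asyncio
import time

# Hypothetical stand-in for a network or database call.
async def fetch_user(user_id: int) -> dict:
    await asyncio.sleep(0.1)  # simulate 100 ms of I/O latency
    return {"id": user_id, "status": "ok"}

async def main() -> list:
    # gather() schedules all coroutines concurrently on the event loop,
    # so the awaits overlap instead of running back to back.
    return await asyncio.gather(*(fetch_user(i) for i in range(50)))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(len(results), round(elapsed, 1))
```

The same pattern extends to real clients (HTTP, message queues) as long as their calls are awaitable.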
Lead Data Engineer
Experience: 6-12 Years
Location: Bangalore
Type: Full-time
About Digit88
Digit88 is a niche product engineering consulting company based out of Bangalore, with experience building offshore development centers for US startups and MNCs over the last 6+ years. The founding team has 50+ years of product engineering and services experience across India and the US.
The Opportunity
The Digit88 development team manages, and is expanding, the dedicated offshore team for a US (Bay Area, NYC) based NLP/chatbot platform development partner that is building a next-generation AI/NLP/chatbot-based customer engagement platform. The candidate would join an existing team of 16+ engineers and help expand the Platform Engineering, Production Support, and Monitoring services for our client.
Job Profile:
Digit88 is looking for an enthusiastic, self-motivated, hands-on Lead Data Engineer in ETL pipelines and data analytics with great troubleshooting skills to join our engineering team. Experience with a fast-paced India/US product startup or a product engineering services company, in a senior/lead engineer role building and managing a high-performance real-time system, is mandatory.
The applicant should have experience instrumenting applications with a client library that captures data asynchronously, avoiding overhead on the application thread, and pushes it onward for further processing to derive real-time analytics. Applicants must have a passion for engineering with accuracy and efficiency, be highly motivated and organized, able to work as part of a team, and also possess the ability to work independently with minimal supervision.
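The asynchronous instrumentation described above can be sketched minimally: the application thread only enqueues events, and a background worker drains the queue for downstream processing. All class and field names here are illustrative assumptions; in a real system the drain step would hand events to something like a Kafka producer rather than a local list.

```python
import queue
import threading

# Minimal sketch of an async instrumentation client: capture() is cheap
# on the hot path; a daemon worker does the actual processing.
class AsyncInstrumenter:
    def __init__(self):
        self._events = queue.Queue()
        self._processed = []
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def capture(self, event: dict) -> None:
        # Called on the application thread: just enqueue, no blocking work.
        self._events.put(event)

    def _drain(self) -> None:
        while True:
            event = self._events.get()
            if event is None:  # shutdown sentinel
                break
            # In a real pipeline this would push to Kafka/Spark for
            # real-time analytics; here we just collect the events.
            self._processed.append(event)

    def close(self) -> list:
        # Flush remaining events, stop the worker, return what was processed.
        self._events.put(None)
        self._worker.join()
        return self._processed

inst = AsyncInstrumenter()
for i in range(100):
    inst.capture({"event_id": i})
processed = inst.close()
print(len(processed))  # 100
```

The sentinel-based shutdown guarantees every captured event is drained before `close()` returns, since the queue is FIFO and the sentinel is enqueued last.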
You should be able to explain a data pipeline architecture for a given problem. To be successful in this role, you should possess:
● Extensive experience in Spark and Kafka
● Working knowledge of Java, microservices, and Spring Boot
● Extensive experience as a data engineer, able to build data pipelines, with an understanding of ETL pipelines
● Extensive experience in creating and maintaining a data aggregation layer using Spark
● Experience in handling and successfully managing huge volumes of streaming data
● Processing events using Spark
● Experience in setting up and running jobs on Spark
● Experience in analytics is highly desirable
● Ability to translate complex functional and technical requirements into detailed designs
● Extensive work experience with scalable, high-performance systems
● Working knowledge of Linux commands and scripting
● File queuing on Hadoop
● Knowledge of Druid/Elasticsearch is a definite PLUS
Minimum Qualifications:
● Bachelor's degree in Computer Science or a related field
● 5+ years of experience in ETL pipelines and data analytics
● 5+ years of experience building successful production software systems
● 2+ years of experience working with NoSQL (Cassandra/MongoDB/DynamoDB/Azure Cosmos DB), including data modeling techniques for NoSQL
● 1+ years of experience working with Hive ETL/QL and building Map/Reduce programs on HDFS
Additional Project/Soft Skills:
● Product-from-scratch experience on at least 2 products; should be able to work independently with India- and US-based team members
● Strong verbal and written communication, with the ability to articulate problems and solutions over phone and email
● Strong sense of urgency, with a passion for accuracy and timeliness
● Ability to work calmly in high-pressure situations and manage multiple projects/tasks
● Ability to work independently, with superior skills in issue resolution
Get your career on a pedestal with a major IT services company led by successful entrepreneurs. Our client is a sales and marketing platform for insurance businesses. Their partners include major insurance companies, brokerage companies, NBFCs, and insurance intermediaries. Their AI-based app offers partners a platform to enhance their product sales, distribution, and far-reaching expansion. This also leads to easy and affordable insurance products and an empowering, transformational experience for end-customers. The co-founders are business and tech experts from IIT, Symbiosis, and MDI who bring with them nearly 50 years of experience in the finance and IT industries, along with successful entrepreneurial wisdom. Working out of Bengaluru and Gurgaon, the team has created a single standardized platform using AI and blockchain technology to bring clarity and efficiency to the insurance sales and distribution cycle.
As a Tech Lead - Backend (Node.js), you will act as a primary interface with the senior management and operations teams to support a high-performance and exponentially scalable product.
What you will do:
Providing technical guidance as well as hands-on management for all product development within the company.
Identifying frameworks/technologies/languages/libraries to be used to achieve desired goals.
Underscoring the pros and cons of various available technologies and presenting arguments to management and the technical team.
Taking responsibility for refactoring existing code.
Leading, managing, and mentoring a team of 5+ engineers.
Serving as a key member of the management team that sets the company's strategic direction.
Defining standards & best practices to support agile development processes.
Planning, tracking, and estimating product development activities.
Ensuring the optimal application of technology and engineering resources to meet product development and/or customer requirements, as per the product and/or marketing requirements document.
Ensuring that the product quality is world-class at all times.
Growing the internal information technology development organisation; managing and recruiting a multidisciplinary, high-performance technology team.
Developing RESTful backend services if needed.
Ensuring implementation of formal processes to support the product development process.
Being passionate about automated testing: managing/driving testing - unit tests, system tests, regression.
Performance tuning/profiling: in-depth understanding of popular architectures - SOA, RESTful, microservices, messaging bus.
What you need to have:
B.E / B.Tech or similar qualification from a premier institute.
Good communication skills.
Self-starter, highly motivated.
People management skills.
Go-getter attitude.
3-7 years of hands-on development experience in backend development, with a track record of solid technical accomplishments.
Node.js experience will be preferred.
Knowledge of multiple programming languages will be preferred.
Knowledge of caching solutions: Redis, Memcached.
Database experience: MySQL/NoSQL, RabbitMQ, Kafka.
Capability to present different architectures for the same problem.
Knowledge of popular front-end MVC technologies.
Understanding of production-level problems and their possible solutions.
Responsibilities for Data Architect
Research and properly evaluate sources of information to determine possible limitations in reliability or usability
Apply sampling techniques to effectively determine and define ideal categories to be questioned
Compare and analyze provided statistical information to identify patterns, relationships, and problems
Define and utilize statistical methods to solve industry-specific problems in varying fields, such as economics and engineering
Prepare detailed reports for management and other departments by analyzing and interpreting data
Train assistants and other members of the team on how to properly organize findings and read data collected
Design computer code using various languages to improve and update software and applications
Refer to previous instances and findings to determine the ideal method for gathering data
We are looking for a passionate Lead Software Engineer to join our tech team and spearhead building high-traffic, highly scalable, multi-tiered, complex web applications. This person needs to be a hands-on engineer with strong object-oriented design skills and a thorough understanding of common design paradigms.
Required Candidate profile
Prior experience that gets you closer to being the right fit:
Fluent in Java
Strong coding, algorithms, and problem-solving skills
Good working knowledge of JVM internals: memory management, garbage collection, throughput, latency, CPU utilization, and networking configuration
Experience with distributed systems and their application to building scalable, supportable systems
Experience with any of the prevalent NoSQL solutions like HBase, Cassandra, MongoDB, Couchbase, Elasticsearch, etc.
Experience with any of the prevalent messaging and queuing technologies like ActiveMQ, RabbitMQ, Kafka, etc.
Experience with Test-Driven Development using technologies like RSpec, Cucumber, Capybara
Preferably over 5 years of experience