Cognologix is a technology and business development firm with a focus on emerging decentralized business models and innovative technologies related to Blockchain, Machine Learning, Conversational Bots, Big Data and Search. We help enterprises, both large and medium, disrupt their markets by re-imagining their business models and innovating like start-ups. The Cognologix team excels at the ideation, architecture, prototyping and development of cutting-edge products.
Job Description: We are seeking passionate engineers experienced in software development using Machine Learning (ML) and Natural Language Processing (NLP) techniques to join our development team in Bangalore, India. We're a fast-growing startup working on an enterprise product: an intelligent data extraction platform for various types of documents.
Your responsibilities:
• Build, improve and extend NLP capabilities
• Research and evaluate different approaches to NLP problems
• Write well-designed code that produces deliverable results
• Write code that scales and can be deployed to production
You must have:
• A solid grounding in statistical methods
• Experience in named entity recognition, POS tagging, lemmatization, vector representations of textual data and neural networks (RNN, LSTM)
• A solid foundation in Python, data structures, algorithms, and general software development skills
• Ability to apply machine learning to problems that deal with language
• Engineering ability to build robustly scalable pipelines
• Ability to work in a multi-disciplinary team with a strong product focus
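The "vector representations of textual data" skill mentioned above can be illustrated with a minimal bag-of-words sketch in plain Python; the corpus and vocabulary here are purely illustrative, and real systems would typically use a library such as scikit-learn or spaCy:

```python
from collections import Counter

def build_vocab(corpus):
    """Map each unique token across the corpus to a column index."""
    vocab = {}
    for doc in corpus:
        for token in doc.lower().split():
            vocab.setdefault(token, len(vocab))
    return vocab

def bag_of_words(doc, vocab):
    """Represent a document as a term-frequency vector over the vocabulary."""
    counts = Counter(doc.lower().split())
    return [counts.get(token, 0) for token in vocab]

# Illustrative two-document corpus.
corpus = ["the cat sat", "the dog sat down"]
vocab = build_vocab(corpus)
vectors = [bag_of_words(doc, vocab) for doc in corpus]
```

Each document becomes a fixed-length count vector, which is the simplest form of the textual feature representations this role works with.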
Recko Inc. is looking for data engineers to join our kick-ass engineering team: smart, dynamic individuals who can connect all the pieces of the data ecosystem.
What we are looking for:
• 3+ years of development experience in at least one of MySQL, Oracle, PostgreSQL or MSSQL, and experience working with Big Data frameworks, platforms and data stores such as Hadoop, HDFS, Spark, Oozie, Hue, EMR, Scala, Hive, Glue, Kerberos, etc.
• Strong experience setting up data warehouses, data modeling, data wrangling and dataflow architecture on the cloud
• 2+ years of experience with public cloud services such as AWS, Azure or GCP, and languages like Java or Python
• 2+ years of development experience with Amazon Redshift, Google BigQuery or Azure data warehouse platforms preferred
• Knowledge of statistical analysis tools like R, SAS, etc.
• Familiarity with any data visualization software
• A growth mindset and a passion for building things from the ground up, and, most importantly, you should be fun to work with
As a data engineer at Recko, you will:
• Create and maintain optimal data pipeline architecture
• Assemble large, complex data sets that meet functional and non-functional business requirements
• Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Build the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources using SQL and AWS "big data" technologies
• Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics
• Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
• Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
• Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader
• Work with data and analytics experts to strive for greater functionality in our data systems
About Recko: Recko was founded in 2017 to organise the world's transactional information and provide intelligent applications to finance and product teams to make sense of the vast amount of data available. With the proliferation of digital transactions over the past two decades, enterprises, banks and financial institutions are finding it difficult to keep track of the money flowing across their systems. With the Recko platform, businesses can build, integrate and adapt innovative and complex financial use cases within the organization and across external payment ecosystems with agility, confidence and at scale. Today, customer-obsessed brands such as Deliveroo, Meesho, Grofers, Dunzo and Acommerce use Recko so their finance teams can optimize resources with automation and prioritize growth over repetitive and time-consuming day-to-day operational tasks. Recko is a Series A funded startup, backed by marquee investors like Vertex Ventures, Prime Venture Partners and Locus Ventures. Traditionally, enterprise software has been built around functionality. We believe software is an extension of one's capability, and it should be delightful and fun to use.
Working at Recko: We believe that great companies are built by amazing people. At Recko, we are a group of young engineers, product managers, analysts and business folks on a mission to bring consumer-tech DNA to enterprise fintech applications.
The current team at Recko is 60+ members strong, with stellar experience across the fintech, e-commerce and digital domains at companies like Flipkart, PhonePe, Ola Money, Belong, Razorpay, Grofers, Jio, Oracle, etc. We are growing aggressively across verticals.
Responsibilities:
• Partner with internal business owners (product, marketing, edit, etc.) to understand needs and develop custom analyses to optimize user engagement and retention
• Develop a good understanding of the underlying business and the workings of cross-functional teams for successful execution
• Design and develop analyses based on business requirements and challenges
• Leverage statistical analysis on consumer research and data mining projects, including segmentation, clustering, factor analysis, multivariate regression, predictive modeling, etc.
• Provide statistical analysis on custom research projects and consult on A/B testing and other statistical analysis as needed
• Deliver other reports and custom analyses as required
• Identify and use appropriate investigative and analytical technologies to interpret and verify results
• Apply and learn a wide variety of tools and languages to achieve results
• Use best practices to develop statistical and/or machine learning models that address business needs
Requirements:
• 2-4 years of relevant experience in data science
• Preferred education: Bachelor's degree in a technical field or equivalent experience
• Experience in advanced analytics, model building, statistical modeling, optimization and machine learning algorithms
• Machine learning algorithms: crystal-clear understanding, coding, implementation, error analysis and model tuning knowledge of linear regression, logistic regression, SVMs, shallow neural networks, clustering, decision trees, random forests, XGBoost, recommender systems, ARIMA and anomaly detection
• Feature selection, hyperparameter tuning, model selection, error analysis, boosting and ensemble methods
• Strong programming skills in Python, data processing using SQL or equivalent, and the ability to experiment with newer open-source tools
• Experience normalizing data to ensure it is homogeneous and consistently formatted to enable sorting, querying and analysis
• Experience designing, developing, implementing and maintaining databases and programs to manage data analysis efforts
• Experience with big data and cloud computing, e.g. Spark and Hadoop (MapReduce, Pig, Hive)
• Experience in the risk and credit scoring domains preferred
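Of the modeling skills listed above, linear regression is the simplest to sketch. Below is a minimal plain-Python closed-form fit of y = a + b*x by ordinary least squares on illustrative data; it is a teaching sketch, not a stand-in for the statistical tooling (scikit-learn, statsmodels, etc.) a candidate would actually use:

```python
def fit_simple_ols(xs, ys):
    """Fit y = a + b*x by minimising squared error (closed-form OLS)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is covariance(x, y) divided by variance(x).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    b = cov / var
    a = mean_y - b * mean_x
    return a, b

# Illustrative data lying exactly on y = 1 + 2x.
xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]
a, b = fit_simple_ols(xs, ys)
```

Error analysis in the sense used by the posting would then compare predictions a + b*x against held-out observations.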
Database Architect (5-6 years)
• Good knowledge of relational and non-relational databases
• Ability to write complex queries, identify problematic queries and provide solutions
• Good hands-on experience with database tools
• Experience with both SQL and NoSQL databases such as SQL Server, PostgreSQL, MongoDB, MariaDB, etc.
• Worked on data model preparation and database structuring
Job Title: Data Science Engineer
Work Location: Chennai
Experience Level: 5+ years
Package: Up to 18 LPA
Notice Period: Immediate joiners
It's a full-time opportunity with our client.
Mandatory Skills: Machine Learning, Python, Tableau & SQL
Job Requirements:
- 2+ years of industry experience in predictive modeling, data science and analysis
- Experience with ML models including but not limited to regression, random forests and XGBoost
- Experience in an ML engineer or data scientist role building and deploying ML models, or hands-on experience developing deep learning models
- Experience writing code in Python and SQL, with documentation for reproducibility
- Strong proficiency in Tableau
- Experience handling big datasets, diving into data to discover hidden patterns, using data visualization tools and writing SQL
- Experience writing and speaking about technical concepts to business, technical and lay audiences, and giving data-driven presentations
- AWS SageMaker experience is a plus but not required
Responsibilities:
- Responsible for implementation and ongoing administration of Hadoop infrastructure
- Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments
- Working with data delivery teams to set up new Hadoop users. This includes setting up Linux users, setting up Kerberos principals and testing HDFS, Hive, Pig and MapReduce access for the new users
- Cluster maintenance, as well as creation and removal of nodes, using tools like Ganglia, Nagios, Cloudera Manager Enterprise, Dell OpenManage and other tools
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines
- Screening Hadoop cluster job performance and capacity planning
- Monitoring Hadoop cluster connectivity and security
- Managing and reviewing Hadoop log files
- File system management and monitoring
- Diligently teaming with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality and availability
- Collaborating with application teams to install operating system and Hadoop updates, patches and version upgrades when required
Qualifications:
- Bachelor's degree in Information Technology, Computer Science or other relevant fields
- General operational expertise such as good troubleshooting skills and an understanding of systems capacity, bottlenecks, and the basics of memory, CPU, OS, storage and networks
- Hadoop skills such as HBase, Hive, Pig and Mahout
- Ability to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, monitor critical parts of the cluster, configure NameNode high availability, schedule and configure it, and take backups
- Good knowledge of Linux, as Hadoop runs on Linux
- Familiarity with open-source configuration management and deployment tools such as Puppet or Chef, and with Linux scripting
Nice to have:
- Knowledge of troubleshooting core Java applications is a plus
Data Engineer
• Drive the data engineering implementation
• Strong experience in building data pipelines
• AWS stack experience is a must
• Deliver conceptual, logical and physical data models for the implementation teams
• Strong SQL is a must: advanced working knowledge of SQL, experience working with a variety of relational databases, and SQL query authoring
• AWS cloud data pipeline experience is a must: data pipelines and data-centric applications using distributed storage platforms like S3 and distributed processing platforms like Spark, Airflow and Kafka
• Working knowledge of AWS technologies such as S3, EC2, EMR, RDS, Lambda and Elasticsearch
• Ability to use a major programming language (e.g. Python or Java) to process data for modelling
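The extract-transform-load pattern behind the pipeline work described above can be sketched with the Python standard library's sqlite3 standing in for a real warehouse; the table names, columns and values here are illustrative assumptions, and a production pipeline would run similar SQL against S3-backed stores via Spark or Airflow tasks:

```python
import sqlite3

# An in-memory database stands in for both the source and warehouse stores.
conn = sqlite3.connect(":memory:")

# Extract: raw source rows with inconsistent status casing, amounts in cents.
conn.execute("CREATE TABLE raw_orders (id INTEGER, amount_cents INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 1250, "PAID"), (2, 300, "refunded"), (3, 990, "Paid")],
)

# Transform and load: normalise status casing, convert cents to decimal amounts.
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL, status TEXT)")
conn.execute(
    "INSERT INTO orders SELECT id, amount_cents / 100.0, LOWER(status) FROM raw_orders"
)

# Downstream analytics query over the cleaned table.
paid_total = conn.execute(
    "SELECT SUM(amount) FROM orders WHERE status = 'paid'"
).fetchone()[0]
```

The same shape — extract raw rows, apply normalising transforms, load into a modelled table, query for metrics — is what the S3/Spark/Airflow stack in the posting implements at scale.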
Responsibilities:
- Define the short-term tactics and long-term technology strategy
- Communicate that technical vision to technical and non-technical partners, customers and investors
- Lead the development of AI/ML related products as the organisation matures into lean, high-performing agile teams
- Scale the AI/ML teams by finding and hiring the right mix of on-shore and off-shore resources
- Work collaboratively with the business, partners and customers to consistently deliver business value
- Own the vision and execution of developing and integrating AI and machine learning into all aspects of the platform
- Drive innovation through the use of technology and unique ways of applying it to business problems
Experience and Qualifications:
- Master's or Ph.D. in AI, computer science, ML, electrical engineering or related fields (statistics, applied math, computational neuroscience)
- Relevant experience leading and building teams and establishing technical direction
- A well-developed portfolio of past software development, composed of some mixture of professional work, open-source contributions and personal projects
- Experience in leading and developing remote and distributed teams
- Ability to think strategically and apply that thinking to innovative solutions
- Experience with cloud infrastructure
- Experience working with machine learning, artificial intelligence and large datasets to drive insights and business value
- Experience in agent architectures, deep learning, neural networks, computer vision and NLP
- Experience with distributed computational frameworks (YARN, Spark, Hadoop)
- Proficiency in Python and C++; familiarity with DL frameworks (e.g. neon, TensorFlow, Caffe, etc.)
Personal Attributes:
- Excellent communication skills
- Strong fit with the culture
- Hands-on approach; self-motivated with a strong work ethic
- Ability to learn quickly (technology, business models, target industries)
- Creative and inspired
Superpowers we love:
- Entrepreneurial spirit and a vibrant personality
- Experience with the lean startup build-measure-learn cycle
- A vision for AI
- Extensive understanding of why things are done the way they are done in agile development
- A passion for adding business value
Note: The selected candidate will also be offered ESOPs.
Employment Type: Full Time
Salary: 8-10 Lacs + ESOP
Function: Systems/Product Software
Experience: 3-10 Years
Strong exposure to ETL / Big Data / Talend / Hadoop / Spark / Hive / Pig. To be considered for a Senior Data Engineer position, a candidate must have a proven track record of architecting data solutions on current and advanced technical platforms, and the leadership ability to lead a team providing data-centric solutions with best practices and modern technologies in mind. They build collaborative relationships across all levels of the business and the IT organization. They possess analytic and problem-solving skills, along with the ability to research and provide appropriate guidance for synthesizing complex information and extracting business value. They have the intellectual curiosity and ability to deliver solutions with creativity and quality, work effectively with the business and customers to obtain business value for the requested work, and can communicate technical results to both technical and non-technical users using effective storytelling techniques and visualizations. They have a demonstrated ability to perform high-quality work, with innovation, both independently and collaboratively.